00:00:00.001 Started by upstream project "autotest-per-patch" build number 132557 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.122 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.124 The recommended git tool is: git 00:00:00.124 using credential 00000000-0000-0000-0000-000000000002 00:00:00.127 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.170 Fetching changes from the remote Git repository 00:00:00.171 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.214 Using shallow fetch with depth 1 00:00:00.214 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.214 > git --version # timeout=10 00:00:00.242 > git --version # 'git version 2.39.2' 00:00:00.242 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.262 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.262 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.649 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.660 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.671 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.671 > git config core.sparsecheckout # timeout=10 00:00:05.681 > git read-tree -mu HEAD # timeout=10 00:00:05.696 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.719 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.719 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.801 [Pipeline] Start of Pipeline 00:00:05.817 [Pipeline] library 00:00:05.818 Loading library shm_lib@master 00:00:05.818 Library shm_lib@master is cached. Copying from home. 00:00:05.834 [Pipeline] node 00:00:08.343 Running on VM-host-SM38 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:08.345 [Pipeline] { 00:00:08.357 [Pipeline] catchError 00:00:08.359 [Pipeline] { 00:00:08.373 [Pipeline] wrap 00:00:08.382 [Pipeline] { 00:00:08.390 [Pipeline] stage 00:00:08.392 [Pipeline] { (Prologue) 00:00:08.407 [Pipeline] echo 00:00:08.409 Node: VM-host-SM38 00:00:08.413 [Pipeline] cleanWs 00:00:08.422 [WS-CLEANUP] Deleting project workspace... 00:00:08.422 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.428 [WS-CLEANUP] done 00:00:08.678 [Pipeline] setCustomBuildProperty 00:00:08.765 [Pipeline] httpRequest 00:00:09.126 [Pipeline] echo 00:00:09.128 Sorcerer 10.211.164.20 is alive 00:00:09.138 [Pipeline] retry 00:00:09.140 [Pipeline] { 00:00:09.152 [Pipeline] httpRequest 00:00:09.156 HttpMethod: GET 00:00:09.157 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.157 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.174 Response Code: HTTP/1.1 200 OK 00:00:09.175 Success: Status code 200 is in the accepted range: 200,404 00:00:09.175 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:28.757 [Pipeline] } 00:00:28.775 [Pipeline] // retry 00:00:28.785 [Pipeline] sh 00:00:29.069 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:29.089 [Pipeline] httpRequest 00:00:29.475 [Pipeline] echo 00:00:29.477 Sorcerer 10.211.164.20 is alive 00:00:29.488 [Pipeline] retry 00:00:29.491 [Pipeline] { 00:00:29.506 [Pipeline] httpRequest 00:00:29.511 HttpMethod: GET 00:00:29.512 URL: http://10.211.164.20/packages/spdk_97329b16b79b647608552d1c490f7330aaf30ec8.tar.gz 00:00:29.512 Sending request to url: http://10.211.164.20/packages/spdk_97329b16b79b647608552d1c490f7330aaf30ec8.tar.gz 00:00:29.528 Response Code: HTTP/1.1 200 OK 00:00:29.529 Success: Status code 200 is in the accepted range: 200,404 00:00:29.529 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_97329b16b79b647608552d1c490f7330aaf30ec8.tar.gz 00:00:43.167 [Pipeline] } 00:00:43.187 [Pipeline] // retry 00:00:43.195 [Pipeline] sh 00:00:43.471 + tar --no-same-owner -xf spdk_97329b16b79b647608552d1c490f7330aaf30ec8.tar.gz 00:00:46.773 [Pipeline] sh 00:00:47.051 + git -C spdk log --oneline -n5 00:00:47.051 97329b16b bdev/malloc: malloc_done() uses switch-case for clean up 00:00:47.051 afdec00e1 nvmf: Add hide_metadata option to nvmf_subsystem_add_ns 00:00:47.051 b09de013a nvmf: Get metadata config by not bdev but bdev_desc 00:00:47.051 971ec0126 bdevperf: Add hide_metadata option 00:00:47.051 894d5af2a bdevperf: Get metadata config by not bdev but bdev_desc 00:00:47.071 [Pipeline] writeFile 00:00:47.089 [Pipeline] sh 00:00:47.378 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:47.388 [Pipeline] sh 00:00:47.666 + cat autorun-spdk.conf 00:00:47.666 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:47.666 SPDK_TEST_NVMF=1 00:00:47.666 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:47.666 SPDK_TEST_URING=1 00:00:47.666 SPDK_TEST_USDT=1 00:00:47.666 SPDK_RUN_UBSAN=1 00:00:47.666 NET_TYPE=virt 00:00:47.666 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:47.672 RUN_NIGHTLY=0 00:00:47.674 [Pipeline] } 00:00:47.685 [Pipeline] // stage 00:00:47.698 [Pipeline] stage 00:00:47.700 [Pipeline] { (Run VM) 00:00:47.711 [Pipeline] sh 00:00:47.988 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:47.988 + echo 'Start stage prepare_nvme.sh' 00:00:47.988 Start stage prepare_nvme.sh 00:00:47.988 + [[ -n 8 ]] 00:00:47.988 + disk_prefix=ex8 00:00:47.988 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:47.988 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:47.988 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:47.988 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:47.988 ++ SPDK_TEST_NVMF=1 00:00:47.988 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:47.988 
++ SPDK_TEST_URING=1 00:00:47.988 ++ SPDK_TEST_USDT=1 00:00:47.988 ++ SPDK_RUN_UBSAN=1 00:00:47.988 ++ NET_TYPE=virt 00:00:47.988 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:47.988 ++ RUN_NIGHTLY=0 00:00:47.988 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:47.988 + nvme_files=() 00:00:47.988 + declare -A nvme_files 00:00:47.988 + backend_dir=/var/lib/libvirt/images/backends 00:00:47.988 + nvme_files['nvme.img']=5G 00:00:47.988 + nvme_files['nvme-cmb.img']=5G 00:00:47.988 + nvme_files['nvme-multi0.img']=4G 00:00:47.988 + nvme_files['nvme-multi1.img']=4G 00:00:47.988 + nvme_files['nvme-multi2.img']=4G 00:00:47.988 + nvme_files['nvme-openstack.img']=8G 00:00:47.988 + nvme_files['nvme-zns.img']=5G 00:00:47.988 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:47.988 + (( SPDK_TEST_FTL == 1 )) 00:00:47.988 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:47.988 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:47.988 + for nvme in "${!nvme_files[@]}" 00:00:47.988 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G 00:00:47.988 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:47.988 + for nvme in "${!nvme_files[@]}" 00:00:47.988 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G 00:00:47.988 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:47.988 + for nvme in "${!nvme_files[@]}" 00:00:47.988 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G 00:00:47.988 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:47.988 + for nvme in "${!nvme_files[@]}" 00:00:47.988 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G 00:00:48.558 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:48.558 + for nvme in "${!nvme_files[@]}" 00:00:48.558 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G 00:00:48.558 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:48.558 + for nvme in "${!nvme_files[@]}" 00:00:48.558 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G 00:00:48.558 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:48.558 + for nvme in "${!nvme_files[@]}" 00:00:48.558 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img -s 5G 00:00:49.123 Formatting '/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:49.123 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu 00:00:49.123 + echo 'End stage prepare_nvme.sh' 00:00:49.123 End stage prepare_nvme.sh 00:00:49.135 [Pipeline] sh 00:00:49.413 + DISTRO=fedora39 00:00:49.413 + CPUS=10 00:00:49.413 + RAM=12288 00:00:49.413 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:49.413 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme.img -b 
/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img -H -a -v -f fedora39 00:00:49.413 00:00:49.413 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:00:49.413 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:49.413 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:49.413 HELP=0 00:00:49.413 DRY_RUN=0 00:00:49.413 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme.img,/var/lib/libvirt/images/backends/ex8-nvme-multi0.img, 00:00:49.413 NVME_DISKS_TYPE=nvme,nvme, 00:00:49.413 NVME_AUTO_CREATE=0 00:00:49.413 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img, 00:00:49.413 NVME_CMB=,, 00:00:49.413 NVME_PMR=,, 00:00:49.413 NVME_ZNS=,, 00:00:49.413 NVME_MS=,, 00:00:49.413 NVME_FDP=,, 00:00:49.413 SPDK_VAGRANT_DISTRO=fedora39 00:00:49.413 SPDK_VAGRANT_VMCPU=10 00:00:49.413 SPDK_VAGRANT_VMRAM=12288 00:00:49.413 SPDK_VAGRANT_PROVIDER=libvirt 00:00:49.413 SPDK_VAGRANT_HTTP_PROXY= 00:00:49.413 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:49.413 SPDK_OPENSTACK_NETWORK=0 00:00:49.413 VAGRANT_PACKAGE_BOX=0 00:00:49.413 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:49.413 FORCE_DISTRO=true 00:00:49.413 VAGRANT_BOX_VERSION= 00:00:49.413 EXTRA_VAGRANTFILES= 00:00:49.413 NIC_MODEL=e1000 00:00:49.413 00:00:49.413 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:00:49.413 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:51.310 Bringing machine 'default' up with 'libvirt' provider... 00:00:51.886 ==> default: Creating image (snapshot of base box volume). 00:00:51.886 ==> default: Creating domain with the following settings... 
00:00:51.886 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732652646_dbeb07a7c19f7586076d 00:00:51.886 ==> default: -- Domain type: kvm 00:00:51.886 ==> default: -- Cpus: 10 00:00:51.886 ==> default: -- Feature: acpi 00:00:51.886 ==> default: -- Feature: apic 00:00:51.886 ==> default: -- Feature: pae 00:00:51.886 ==> default: -- Memory: 12288M 00:00:51.886 ==> default: -- Memory Backing: hugepages: 00:00:51.886 ==> default: -- Management MAC: 00:00:51.886 ==> default: -- Loader: 00:00:51.886 ==> default: -- Nvram: 00:00:51.886 ==> default: -- Base box: spdk/fedora39 00:00:51.886 ==> default: -- Storage pool: default 00:00:51.886 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732652646_dbeb07a7c19f7586076d.img (20G) 00:00:51.886 ==> default: -- Volume Cache: default 00:00:51.886 ==> default: -- Kernel: 00:00:51.886 ==> default: -- Initrd: 00:00:51.886 ==> default: -- Graphics Type: vnc 00:00:51.886 ==> default: -- Graphics Port: -1 00:00:51.886 ==> default: -- Graphics IP: 127.0.0.1 00:00:51.886 ==> default: -- Graphics Password: Not defined 00:00:51.886 ==> default: -- Video Type: cirrus 00:00:51.886 ==> default: -- Video VRAM: 9216 00:00:51.886 ==> default: -- Sound Type: 00:00:51.886 ==> default: -- Keymap: en-us 00:00:51.886 ==> default: -- TPM Path: 00:00:51.886 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:51.886 ==> default: -- Command line args: 00:00:51.886 ==> default: -> value=-device, 00:00:51.886 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:51.886 ==> default: -> value=-drive, 00:00:51.886 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-0-drive0, 00:00:51.886 ==> default: -> value=-device, 00:00:51.886 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:51.886 ==> default: -> value=-device, 00:00:51.886 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:51.886 ==> default: -> value=-drive, 00:00:51.886 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:51.886 ==> default: -> value=-device, 00:00:51.886 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:51.886 ==> default: -> value=-drive, 00:00:51.886 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:51.886 ==> default: -> value=-device, 00:00:51.886 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:51.886 ==> default: -> value=-drive, 00:00:51.886 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:51.886 ==> default: -> value=-device, 00:00:51.886 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:52.144 ==> default: Creating shared folders metadata... 00:00:52.144 ==> default: Starting domain. 00:00:53.519 ==> default: Waiting for domain to get an IP address... 00:01:08.383 ==> default: Waiting for SSH to become available... 00:01:08.383 ==> default: Configuring and enabling network interfaces... 
00:01:11.660 default: SSH address: 192.168.121.123:22 00:01:11.660 default: SSH username: vagrant 00:01:11.660 default: SSH auth method: private key 00:01:13.557 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:20.112 ==> default: Mounting SSHFS shared folder... 00:01:21.047 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:21.047 ==> default: Checking Mount.. 00:01:21.981 ==> default: Folder Successfully Mounted! 00:01:21.981 00:01:21.981 SUCCESS! 00:01:21.981 00:01:21.981 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:21.981 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:21.981 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:21.981 00:01:21.990 [Pipeline] } 00:01:22.005 [Pipeline] // stage 00:01:22.015 [Pipeline] dir 00:01:22.016 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:01:22.018 [Pipeline] { 00:01:22.031 [Pipeline] catchError 00:01:22.033 [Pipeline] { 00:01:22.045 [Pipeline] sh 00:01:22.350 + vagrant ssh-config --host vagrant 00:01:22.350 + sed -ne '/^Host/,$p' 00:01:22.350 + tee ssh_conf 00:01:24.878 Host vagrant 00:01:24.878 HostName 192.168.121.123 00:01:24.878 User vagrant 00:01:24.878 Port 22 00:01:24.878 UserKnownHostsFile /dev/null 00:01:24.878 StrictHostKeyChecking no 00:01:24.878 PasswordAuthentication no 00:01:24.878 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:24.878 IdentitiesOnly yes 00:01:24.878 LogLevel FATAL 00:01:24.878 ForwardAgent yes 00:01:24.878 ForwardX11 yes 00:01:24.878 00:01:24.889 [Pipeline] withEnv 00:01:24.892 [Pipeline] { 00:01:24.905 [Pipeline] sh 00:01:25.182 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:01:25.182 source /etc/os-release 00:01:25.182 [[ -e /image.version ]] && img=$(< /image.version) 00:01:25.182 # Minimal, systemd-like check. 00:01:25.182 if [[ -e /.dockerenv ]]; then 00:01:25.182 # Clear garbage from the node'\''s name: 00:01:25.182 # agt-er_autotest_547-896 -> autotest_547-896 00:01:25.182 # $HOSTNAME is the actual container id 00:01:25.182 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:25.182 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:25.182 # We can assume this is a mount from a host where container is running, 00:01:25.183 # so fetch its hostname to easily identify the target swarm worker. 
00:01:25.183 container="$(< /etc/hostname) ($agent)" 00:01:25.183 else 00:01:25.183 # Fallback 00:01:25.183 container=$agent 00:01:25.183 fi 00:01:25.183 fi 00:01:25.183 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:25.183 ' 00:01:25.452 [Pipeline] } 00:01:25.473 [Pipeline] // withEnv 00:01:25.483 [Pipeline] setCustomBuildProperty 00:01:25.503 [Pipeline] stage 00:01:25.505 [Pipeline] { (Tests) 00:01:25.527 [Pipeline] sh 00:01:25.803 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:25.817 [Pipeline] sh 00:01:26.093 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:26.105 [Pipeline] timeout 00:01:26.105 Timeout set to expire in 1 hr 0 min 00:01:26.106 [Pipeline] { 00:01:26.145 [Pipeline] sh 00:01:26.426 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:01:26.685 HEAD is now at 97329b16b bdev/malloc: malloc_done() uses switch-case for clean up 00:01:26.955 [Pipeline] sh 00:01:27.236 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:01:27.251 [Pipeline] sh 00:01:27.590 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:27.607 [Pipeline] sh 00:01:27.886 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo' 00:01:27.886 ++ readlink -f spdk_repo 00:01:27.886 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:27.886 + [[ -n /home/vagrant/spdk_repo ]] 00:01:27.886 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:27.886 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:27.887 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:27.887 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:27.887 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:27.887 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:27.887 + cd /home/vagrant/spdk_repo 00:01:27.887 + source /etc/os-release 00:01:27.887 ++ NAME='Fedora Linux' 00:01:27.887 ++ VERSION='39 (Cloud Edition)' 00:01:27.887 ++ ID=fedora 00:01:27.887 ++ VERSION_ID=39 00:01:27.887 ++ VERSION_CODENAME= 00:01:27.887 ++ PLATFORM_ID=platform:f39 00:01:27.887 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:27.887 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:27.887 ++ LOGO=fedora-logo-icon 00:01:27.887 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:27.887 ++ HOME_URL=https://fedoraproject.org/ 00:01:27.887 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:27.887 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:27.887 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:27.887 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:27.887 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:27.887 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:27.887 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:27.887 ++ SUPPORT_END=2024-11-12 00:01:27.887 ++ VARIANT='Cloud Edition' 00:01:27.887 ++ VARIANT_ID=cloud 00:01:27.887 + uname -a 00:01:27.887 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:27.887 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:28.456 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:28.456 Hugepages 00:01:28.456 node hugesize free / total 00:01:28.456 node0 1048576kB 0 / 0 00:01:28.456 node0 2048kB 0 / 0 00:01:28.456 00:01:28.456 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:28.456 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:28.456 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:28.456 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:28.456 + rm -f /tmp/spdk-ld-path 00:01:28.456 + source autorun-spdk.conf 00:01:28.456 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.456 ++ SPDK_TEST_NVMF=1 00:01:28.456 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:28.456 ++ SPDK_TEST_URING=1 00:01:28.456 ++ SPDK_TEST_USDT=1 00:01:28.456 ++ SPDK_RUN_UBSAN=1 00:01:28.456 ++ NET_TYPE=virt 00:01:28.456 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:28.456 ++ RUN_NIGHTLY=0 00:01:28.456 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:28.456 + [[ -n '' ]] 00:01:28.456 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:28.456 + for M in /var/spdk/build-*-manifest.txt 00:01:28.456 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:28.456 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:28.456 + for M in /var/spdk/build-*-manifest.txt 00:01:28.456 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:28.456 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:28.456 + for M in /var/spdk/build-*-manifest.txt 00:01:28.456 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:28.456 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:28.456 ++ uname 00:01:28.456 + [[ Linux == \L\i\n\u\x ]] 00:01:28.456 + sudo dmesg -T 00:01:28.456 + sudo dmesg --clear 00:01:28.456 + dmesg_pid=4982 00:01:28.456 + [[ Fedora Linux == FreeBSD ]] 00:01:28.456 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:28.456 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:28.456 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:28.456 + [[ -x /usr/src/fio-static/fio ]] 00:01:28.456 + sudo dmesg -Tw 00:01:28.456 + export FIO_BIN=/usr/src/fio-static/fio 00:01:28.456 + FIO_BIN=/usr/src/fio-static/fio 00:01:28.456 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:28.456 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:28.456 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:28.456 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:28.456 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:28.456 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:28.456 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:28.456 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:28.456 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:28.456 20:24:42 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:28.456 20:24:42 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:28.456 20:24:42 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.456 20:24:42 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:28.456 20:24:42 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:28.456 20:24:42 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:01:28.456 20:24:42 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:01:28.456 20:24:42 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:28.456 20:24:42 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:01:28.456 20:24:42 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:28.456 20:24:42 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:28.456 20:24:42 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:28.456 20:24:42 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:28.717 20:24:43 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:28.717 20:24:43 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:28.717 20:24:43 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:28.717 20:24:43 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:28.717 20:24:43 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:28.717 20:24:43 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:28.717 20:24:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.718 20:24:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.718 20:24:43 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.718 20:24:43 -- paths/export.sh@5 -- $ export PATH 00:01:28.718 20:24:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.718 20:24:43 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:28.718 20:24:43 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:28.718 20:24:43 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732652683.XXXXXX 00:01:28.718 20:24:43 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732652683.nnplB5 00:01:28.718 20:24:43 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:28.718 20:24:43 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:28.718 20:24:43 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:28.718 20:24:43 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:28.718 20:24:43 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:28.718 20:24:43 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:28.718 20:24:43 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:28.718 20:24:43 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.718 20:24:43 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:28.718 20:24:43 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:28.718 20:24:43 -- pm/common@17 -- $ local monitor 00:01:28.718 20:24:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.718 20:24:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.718 20:24:43 -- pm/common@25 -- $ sleep 1 00:01:28.718 20:24:43 -- pm/common@21 -- $ date +%s 00:01:28.718 20:24:43 -- pm/common@21 -- $ date +%s 00:01:28.718 20:24:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732652683 00:01:28.718 20:24:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732652683 00:01:28.718 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732652683_collect-cpu-load.pm.log 00:01:28.718 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732652683_collect-vmstat.pm.log 00:01:29.702 20:24:44 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:29.702 20:24:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:29.702 20:24:44 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:29.702 20:24:44 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:29.702 20:24:44 -- spdk/autobuild.sh@16 -- $ date -u 00:01:29.702 Tue Nov 26 08:24:44 PM UTC 2024 00:01:29.702 20:24:44 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:29.702 v25.01-pre-263-g97329b16b 00:01:29.702 20:24:44 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:29.702 20:24:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:29.702 20:24:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:29.702 20:24:44 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:29.702 20:24:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:29.702 20:24:44 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.702 ************************************ 00:01:29.702 START TEST ubsan 00:01:29.702 ************************************ 00:01:29.702 using ubsan 00:01:29.702 20:24:44 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:29.702 00:01:29.702 real 0m0.000s 00:01:29.702 user 0m0.000s 00:01:29.702 sys 0m0.000s 00:01:29.702 20:24:44 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:29.702 20:24:44 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:29.702 ************************************ 00:01:29.702 END TEST ubsan 00:01:29.702 ************************************ 00:01:29.702 20:24:44 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:29.702 20:24:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:29.702 20:24:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:29.702 20:24:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:29.702 20:24:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:29.702 20:24:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:29.702 20:24:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:29.702 20:24:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:29.702 20:24:44 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:29.702 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:29.702 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:29.964 Using 'verbs' RDMA provider 00:01:43.222 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:53.325 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:53.325 Creating mk/config.mk...done. 00:01:53.325 Creating mk/cc.flags.mk...done. 00:01:53.325 Type 'make' to build. 
00:01:53.325 20:25:07 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:01:53.325 20:25:07 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:53.325 20:25:07 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:53.325 20:25:07 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.325 ************************************ 00:01:53.325 START TEST make 00:01:53.325 ************************************ 00:01:53.325 20:25:07 make -- common/autotest_common.sh@1129 -- $ make -j10 00:01:53.325 make[1]: Nothing to be done for 'all'. 00:02:05.630 The Meson build system 00:02:05.630 Version: 1.5.0 00:02:05.630 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:05.630 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:05.630 Build type: native build 00:02:05.630 Program cat found: YES (/usr/bin/cat) 00:02:05.630 Project name: DPDK 00:02:05.630 Project version: 24.03.0 00:02:05.630 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:05.630 C linker for the host machine: cc ld.bfd 2.40-14 00:02:05.630 Host machine cpu family: x86_64 00:02:05.630 Host machine cpu: x86_64 00:02:05.630 Message: ## Building in Developer Mode ## 00:02:05.630 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:05.630 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:05.630 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:05.630 Program python3 found: YES (/usr/bin/python3) 00:02:05.630 Program cat found: YES (/usr/bin/cat) 00:02:05.630 Compiler for C supports arguments -march=native: YES 00:02:05.630 Checking for size of "void *" : 8 00:02:05.630 Checking for size of "void *" : 8 (cached) 00:02:05.630 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:05.630 Library m found: YES 00:02:05.630 Library numa found: YES 00:02:05.630 Has header "numaif.h" : YES 00:02:05.630 Library fdt found: NO 00:02:05.630 Library execinfo found: NO 00:02:05.630 Has header "execinfo.h" : YES 00:02:05.630 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:05.630 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:05.630 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:05.630 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:05.630 Run-time dependency openssl found: YES 3.1.1 00:02:05.630 Run-time dependency libpcap found: YES 1.10.4 00:02:05.630 Has header "pcap.h" with dependency libpcap: YES 00:02:05.630 Compiler for C supports arguments -Wcast-qual: YES 00:02:05.630 Compiler for C supports arguments -Wdeprecated: YES 00:02:05.630 Compiler for C supports arguments -Wformat: YES 00:02:05.630 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:05.630 Compiler for C supports arguments -Wformat-security: NO 00:02:05.630 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:05.630 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:05.630 Compiler for C supports arguments -Wnested-externs: YES 00:02:05.630 Compiler for C supports arguments -Wold-style-definition: YES 00:02:05.630 Compiler for C supports arguments -Wpointer-arith: YES 00:02:05.630 Compiler for C supports arguments -Wsign-compare: YES 00:02:05.630 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:05.630 Compiler for C supports arguments -Wundef: YES 00:02:05.630 Compiler for C supports arguments -Wwrite-strings: YES 00:02:05.630 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:02:05.630 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:05.630 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:05.630 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:05.630 Program objdump found: YES (/usr/bin/objdump) 00:02:05.630 Compiler for C supports arguments -mavx512f: YES 00:02:05.630 Checking if "AVX512 checking" compiles: YES 00:02:05.630 Fetching value of define "__SSE4_2__" : 1 00:02:05.630 Fetching value of define "__AES__" : 1 00:02:05.630 Fetching value of define "__AVX__" : 1 00:02:05.630 Fetching value of define "__AVX2__" : 1 00:02:05.630 Fetching value of define "__AVX512BW__" : 1 00:02:05.630 Fetching value of define "__AVX512CD__" : 1 00:02:05.630 Fetching value of define "__AVX512DQ__" : 1 00:02:05.630 Fetching value of define "__AVX512F__" : 1 00:02:05.630 Fetching value of define "__AVX512VL__" : 1 00:02:05.630 Fetching value of define "__PCLMUL__" : 1 00:02:05.630 Fetching value of define "__RDRND__" : 1 00:02:05.630 Fetching value of define "__RDSEED__" : 1 00:02:05.630 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:05.630 Fetching value of define "__znver1__" : (undefined) 00:02:05.630 Fetching value of define "__znver2__" : (undefined) 00:02:05.630 Fetching value of define "__znver3__" : (undefined) 00:02:05.630 Fetching value of define "__znver4__" : (undefined) 00:02:05.630 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:05.630 Message: lib/log: Defining dependency "log" 00:02:05.630 Message: lib/kvargs: Defining dependency "kvargs" 00:02:05.630 Message: lib/telemetry: Defining dependency "telemetry" 00:02:05.630 Checking for function "getentropy" : NO 00:02:05.630 Message: lib/eal: Defining dependency "eal" 00:02:05.630 Message: lib/ring: Defining dependency "ring" 00:02:05.630 Message: lib/rcu: Defining dependency "rcu" 00:02:05.630 Message: lib/mempool: Defining dependency "mempool" 00:02:05.631 Message: lib/mbuf: Defining dependency "mbuf" 00:02:05.631 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:05.631 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:05.631 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:05.631 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:05.631 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:05.631 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:05.631 Compiler for C supports arguments -mpclmul: YES 00:02:05.631 Compiler for C supports arguments -maes: YES 00:02:05.631 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:05.631 Compiler for C supports arguments -mavx512bw: YES 00:02:05.631 Compiler for C supports arguments -mavx512dq: YES 00:02:05.631 Compiler for C supports arguments -mavx512vl: YES 00:02:05.631 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:05.631 Compiler for C supports arguments -mavx2: YES 00:02:05.631 Compiler for C supports arguments -mavx: YES 00:02:05.631 Message: lib/net: Defining dependency "net" 00:02:05.631 Message: lib/meter: Defining dependency "meter" 00:02:05.631 Message: lib/ethdev: Defining dependency "ethdev" 00:02:05.631 Message: lib/pci: Defining dependency "pci" 00:02:05.631 Message: lib/cmdline: Defining dependency "cmdline" 00:02:05.631 Message: lib/hash: Defining dependency "hash" 00:02:05.631 Message: lib/timer: Defining dependency "timer" 00:02:05.631 Message: lib/compressdev: Defining dependency "compressdev" 00:02:05.631 Message: lib/cryptodev: Defining 
dependency "cryptodev" 00:02:05.631 Message: lib/dmadev: Defining dependency "dmadev" 00:02:05.631 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:05.631 Message: lib/power: Defining dependency "power" 00:02:05.631 Message: lib/reorder: Defining dependency "reorder" 00:02:05.631 Message: lib/security: Defining dependency "security" 00:02:05.631 Has header "linux/userfaultfd.h" : YES 00:02:05.631 Has header "linux/vduse.h" : YES 00:02:05.631 Message: lib/vhost: Defining dependency "vhost" 00:02:05.631 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:05.631 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:05.631 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:05.631 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:05.631 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:05.631 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:05.631 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:05.631 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:05.631 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:05.631 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:05.631 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:05.631 Configuring doxy-api-html.conf using configuration 00:02:05.631 Configuring doxy-api-man.conf using configuration 00:02:05.631 Program mandb found: YES (/usr/bin/mandb) 00:02:05.631 Program sphinx-build found: NO 00:02:05.631 Configuring rte_build_config.h using configuration 00:02:05.631 Message: 00:02:05.631 ================= 00:02:05.631 Applications Enabled 00:02:05.631 ================= 00:02:05.631 00:02:05.631 apps: 00:02:05.631 00:02:05.631 00:02:05.631 Message: 00:02:05.631 ================= 00:02:05.631 Libraries Enabled 00:02:05.631 ================= 00:02:05.631 00:02:05.631 libs: 00:02:05.631 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:05.631 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:05.631 cryptodev, dmadev, power, reorder, security, vhost, 00:02:05.631 00:02:05.631 Message: 00:02:05.631 =============== 00:02:05.631 Drivers Enabled 00:02:05.631 =============== 00:02:05.631 00:02:05.631 common: 00:02:05.631 00:02:05.631 bus: 00:02:05.631 pci, vdev, 00:02:05.631 mempool: 00:02:05.631 ring, 00:02:05.631 dma: 00:02:05.631 00:02:05.631 net: 00:02:05.631 00:02:05.631 crypto: 00:02:05.631 00:02:05.631 compress: 00:02:05.631 00:02:05.631 vdpa: 00:02:05.631 00:02:05.631 00:02:05.631 Message: 00:02:05.631 ================= 00:02:05.631 Content Skipped 00:02:05.631 ================= 00:02:05.631 00:02:05.631 apps: 00:02:05.631 dumpcap: explicitly disabled via build config 00:02:05.631 graph: explicitly disabled via build config 00:02:05.631 pdump: explicitly disabled via build config 00:02:05.631 proc-info: explicitly disabled via build config 00:02:05.631 test-acl: explicitly disabled via build config 00:02:05.631 test-bbdev: explicitly disabled via build config 00:02:05.631 test-cmdline: explicitly disabled via build config 00:02:05.631 test-compress-perf: explicitly disabled via build config 00:02:05.631 test-crypto-perf: explicitly disabled via build config 00:02:05.631 test-dma-perf: explicitly disabled via build config 00:02:05.631 test-eventdev: explicitly disabled via build config 00:02:05.631 test-fib: explicitly disabled via build config 00:02:05.631 
test-flow-perf: explicitly disabled via build config 00:02:05.631 test-gpudev: explicitly disabled via build config 00:02:05.631 test-mldev: explicitly disabled via build config 00:02:05.631 test-pipeline: explicitly disabled via build config 00:02:05.631 test-pmd: explicitly disabled via build config 00:02:05.631 test-regex: explicitly disabled via build config 00:02:05.631 test-sad: explicitly disabled via build config 00:02:05.631 test-security-perf: explicitly disabled via build config 00:02:05.631 00:02:05.631 libs: 00:02:05.631 argparse: explicitly disabled via build config 00:02:05.631 metrics: explicitly disabled via build config 00:02:05.631 acl: explicitly disabled via build config 00:02:05.631 bbdev: explicitly disabled via build config 00:02:05.631 bitratestats: explicitly disabled via build config 00:02:05.631 bpf: explicitly disabled via build config 00:02:05.631 cfgfile: explicitly disabled via build config 00:02:05.631 distributor: explicitly disabled via build config 00:02:05.631 efd: explicitly disabled via build config 00:02:05.631 eventdev: explicitly disabled via build config 00:02:05.631 dispatcher: explicitly disabled via build config 00:02:05.631 gpudev: explicitly disabled via build config 00:02:05.631 gro: explicitly disabled via build config 00:02:05.631 gso: explicitly disabled via build config 00:02:05.631 ip_frag: explicitly disabled via build config 00:02:05.631 jobstats: explicitly disabled via build config 00:02:05.631 latencystats: explicitly disabled via build config 00:02:05.631 lpm: explicitly disabled via build config 00:02:05.631 member: explicitly disabled via build config 00:02:05.631 pcapng: explicitly disabled via build config 00:02:05.631 rawdev: explicitly disabled via build config 00:02:05.631 regexdev: explicitly disabled via build config 00:02:05.631 mldev: explicitly disabled via build config 00:02:05.631 rib: explicitly disabled via build config 00:02:05.631 sched: explicitly disabled via build config 00:02:05.631 stack: explicitly disabled via build config 00:02:05.631 ipsec: explicitly disabled via build config 00:02:05.631 pdcp: explicitly disabled via build config 00:02:05.631 fib: explicitly disabled via build config 00:02:05.631 port: explicitly disabled via build config 00:02:05.631 pdump: explicitly disabled via build config 00:02:05.631 table: explicitly disabled via build config 00:02:05.631 pipeline: explicitly disabled via build config 00:02:05.631 graph: explicitly disabled via build config 00:02:05.631 node: explicitly disabled via build config 00:02:05.631 00:02:05.631 drivers: 00:02:05.631 common/cpt: not in enabled drivers build config 00:02:05.631 common/dpaax: not in enabled drivers build config 00:02:05.631 common/iavf: not in enabled drivers build config 00:02:05.631 common/idpf: not in enabled drivers build config 00:02:05.631 common/ionic: not in enabled drivers build config 00:02:05.631 common/mvep: not in enabled drivers build config 00:02:05.631 common/octeontx: not in enabled drivers build config 00:02:05.631 bus/auxiliary: not in enabled drivers build config 00:02:05.631 bus/cdx: not in enabled drivers build config 00:02:05.631 bus/dpaa: not in enabled drivers build config 00:02:05.631 bus/fslmc: not in enabled drivers build config 00:02:05.631 bus/ifpga: not in enabled drivers build config 00:02:05.631 bus/platform: not in enabled drivers build config 00:02:05.631 bus/uacce: not in enabled drivers build config 00:02:05.631 bus/vmbus: not in enabled drivers build config 00:02:05.631 common/cnxk: not in enabled 
drivers build config 00:02:05.631 common/mlx5: not in enabled drivers build config 00:02:05.631 common/nfp: not in enabled drivers build config 00:02:05.631 common/nitrox: not in enabled drivers build config 00:02:05.631 common/qat: not in enabled drivers build config 00:02:05.631 common/sfc_efx: not in enabled drivers build config 00:02:05.631 mempool/bucket: not in enabled drivers build config 00:02:05.631 mempool/cnxk: not in enabled drivers build config 00:02:05.631 mempool/dpaa: not in enabled drivers build config 00:02:05.631 mempool/dpaa2: not in enabled drivers build config 00:02:05.631 mempool/octeontx: not in enabled drivers build config 00:02:05.631 mempool/stack: not in enabled drivers build config 00:02:05.631 dma/cnxk: not in enabled drivers build config 00:02:05.632 dma/dpaa: not in enabled drivers build config 00:02:05.632 dma/dpaa2: not in enabled drivers build config 00:02:05.632 dma/hisilicon: not in enabled drivers build config 00:02:05.632 dma/idxd: not in enabled drivers build config 00:02:05.632 dma/ioat: not in enabled drivers build config 00:02:05.632 dma/skeleton: not in enabled drivers build config 00:02:05.632 net/af_packet: not in enabled drivers build config 00:02:05.632 net/af_xdp: not in enabled drivers build config 00:02:05.632 net/ark: not in enabled drivers build config 00:02:05.632 net/atlantic: not in enabled drivers build config 00:02:05.632 net/avp: not in enabled drivers build config 00:02:05.632 net/axgbe: not in enabled drivers build config 00:02:05.632 net/bnx2x: not in enabled drivers build config 00:02:05.632 net/bnxt: not in enabled drivers build config 00:02:05.632 net/bonding: not in enabled drivers build config 00:02:05.632 net/cnxk: not in enabled drivers build config 00:02:05.632 net/cpfl: not in enabled drivers build config 00:02:05.632 net/cxgbe: not in enabled drivers build config 00:02:05.632 net/dpaa: not in enabled drivers build config 00:02:05.632 net/dpaa2: not in enabled drivers build config 00:02:05.632 net/e1000: not in enabled drivers build config 00:02:05.632 net/ena: not in enabled drivers build config 00:02:05.632 net/enetc: not in enabled drivers build config 00:02:05.632 net/enetfec: not in enabled drivers build config 00:02:05.632 net/enic: not in enabled drivers build config 00:02:05.632 net/failsafe: not in enabled drivers build config 00:02:05.632 net/fm10k: not in enabled drivers build config 00:02:05.632 net/gve: not in enabled drivers build config 00:02:05.632 net/hinic: not in enabled drivers build config 00:02:05.632 net/hns3: not in enabled drivers build config 00:02:05.632 net/i40e: not in enabled drivers build config 00:02:05.632 net/iavf: not in enabled drivers build config 00:02:05.632 net/ice: not in enabled drivers build config 00:02:05.632 net/idpf: not in enabled drivers build config 00:02:05.632 net/igc: not in enabled drivers build config 00:02:05.632 net/ionic: not in enabled drivers build config 00:02:05.632 net/ipn3ke: not in enabled drivers build config 00:02:05.632 net/ixgbe: not in enabled drivers build config 00:02:05.632 net/mana: not in enabled drivers build config 00:02:05.632 net/memif: not in enabled drivers build config 00:02:05.632 net/mlx4: not in enabled drivers build config 00:02:05.632 net/mlx5: not in enabled drivers build config 00:02:05.632 net/mvneta: not in enabled drivers build config 00:02:05.632 net/mvpp2: not in enabled drivers build config 00:02:05.632 net/netvsc: not in enabled drivers build config 00:02:05.632 net/nfb: not in enabled drivers build config 00:02:05.632 
net/nfp: not in enabled drivers build config 00:02:05.632 net/ngbe: not in enabled drivers build config 00:02:05.632 net/null: not in enabled drivers build config 00:02:05.632 net/octeontx: not in enabled drivers build config 00:02:05.632 net/octeon_ep: not in enabled drivers build config 00:02:05.632 net/pcap: not in enabled drivers build config 00:02:05.632 net/pfe: not in enabled drivers build config 00:02:05.632 net/qede: not in enabled drivers build config 00:02:05.632 net/ring: not in enabled drivers build config 00:02:05.632 net/sfc: not in enabled drivers build config 00:02:05.632 net/softnic: not in enabled drivers build config 00:02:05.632 net/tap: not in enabled drivers build config 00:02:05.632 net/thunderx: not in enabled drivers build config 00:02:05.632 net/txgbe: not in enabled drivers build config 00:02:05.632 net/vdev_netvsc: not in enabled drivers build config 00:02:05.632 net/vhost: not in enabled drivers build config 00:02:05.632 net/virtio: not in enabled drivers build config 00:02:05.632 net/vmxnet3: not in enabled drivers build config 00:02:05.632 raw/*: missing internal dependency, "rawdev" 00:02:05.632 crypto/armv8: not in enabled drivers build config 00:02:05.632 crypto/bcmfs: not in enabled drivers build config 00:02:05.632 crypto/caam_jr: not in enabled drivers build config 00:02:05.632 crypto/ccp: not in enabled drivers build config 00:02:05.632 crypto/cnxk: not in enabled drivers build config 00:02:05.632 crypto/dpaa_sec: not in enabled drivers build config 00:02:05.632 crypto/dpaa2_sec: not in enabled drivers build config 00:02:05.632 crypto/ipsec_mb: not in enabled drivers build config 00:02:05.632 crypto/mlx5: not in enabled drivers build config 00:02:05.632 crypto/mvsam: not in enabled drivers build config 00:02:05.632 crypto/nitrox: not in enabled drivers build config 00:02:05.632 crypto/null: not in enabled drivers build config 00:02:05.632 crypto/octeontx: not in enabled drivers build config 00:02:05.632 crypto/openssl: not in enabled drivers build config 00:02:05.632 crypto/scheduler: not in enabled drivers build config 00:02:05.632 crypto/uadk: not in enabled drivers build config 00:02:05.632 crypto/virtio: not in enabled drivers build config 00:02:05.632 compress/isal: not in enabled drivers build config 00:02:05.632 compress/mlx5: not in enabled drivers build config 00:02:05.632 compress/nitrox: not in enabled drivers build config 00:02:05.632 compress/octeontx: not in enabled drivers build config 00:02:05.632 compress/zlib: not in enabled drivers build config 00:02:05.632 regex/*: missing internal dependency, "regexdev" 00:02:05.632 ml/*: missing internal dependency, "mldev" 00:02:05.632 vdpa/ifc: not in enabled drivers build config 00:02:05.632 vdpa/mlx5: not in enabled drivers build config 00:02:05.632 vdpa/nfp: not in enabled drivers build config 00:02:05.632 vdpa/sfc: not in enabled drivers build config 00:02:05.632 event/*: missing internal dependency, "eventdev" 00:02:05.632 baseband/*: missing internal dependency, "bbdev" 00:02:05.632 gpu/*: missing internal dependency, "gpudev" 00:02:05.632 00:02:05.632 00:02:05.632 Build targets in project: 84 00:02:05.632 00:02:05.632 DPDK 24.03.0 00:02:05.632 00:02:05.632 User defined options 00:02:05.632 buildtype : debug 00:02:05.632 default_library : shared 00:02:05.632 libdir : lib 00:02:05.632 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:05.632 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:05.632 c_link_args : 00:02:05.632 
cpu_instruction_set: native 00:02:05.632 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:05.632 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:05.632 enable_docs : false 00:02:05.632 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:05.632 enable_kmods : false 00:02:05.632 max_lcores : 128 00:02:05.632 tests : false 00:02:05.632 00:02:05.632 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:05.632 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:05.632 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:05.632 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:05.632 [3/267] Linking static target lib/librte_kvargs.a 00:02:05.632 [4/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:05.632 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:05.632 [6/267] Linking static target lib/librte_log.a 00:02:05.632 [7/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:05.632 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:05.632 [9/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:05.632 [10/267] Linking static target lib/librte_telemetry.a 00:02:05.632 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:05.632 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:05.632 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:05.632 [14/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.632 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:05.632 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:05.632 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:05.894 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:06.155 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:06.155 [20/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.155 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:06.155 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:06.155 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:06.416 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:06.416 [25/267] Linking target lib/librte_log.so.24.1 00:02:06.416 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:06.416 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:06.416 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:06.416 [29/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:06.416 [30/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.416 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:06.677 [32/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:06.677 [33/267] Linking target lib/librte_kvargs.so.24.1 00:02:06.677 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:06.677 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:06.939 [36/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:06.939 [37/267] Linking target lib/librte_telemetry.so.24.1 00:02:06.939 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:06.939 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:06.939 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:06.939 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:06.939 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:06.939 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:06.939 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:07.200 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:07.200 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:07.200 [47/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:07.200 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:07.200 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:07.200 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:07.461 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:07.461 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:07.461 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:07.461 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:07.723 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:07.723 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:07.723 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:07.723 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:07.723 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:07.723 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:07.982 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:07.982 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:07.983 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:07.983 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:08.245 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:08.245 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:08.245 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:08.245 [68/267] Compiling C object 
lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:08.506 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:08.506 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:08.506 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:08.506 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:08.506 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:08.506 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:08.506 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:08.766 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:08.766 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:08.766 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:08.766 [79/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:08.766 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:08.766 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:09.027 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:09.027 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:09.028 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:09.028 [85/267] Linking static target lib/librte_eal.a 00:02:09.289 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:09.289 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:09.289 [88/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:09.289 [89/267] Linking static target lib/librte_rcu.a 00:02:09.289 [90/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:09.289 [91/267] Linking static target lib/librte_ring.a 00:02:09.289 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:09.289 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:09.553 [94/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:09.553 [95/267] Linking static target lib/librte_mempool.a 00:02:09.553 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:09.553 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:09.819 [98/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:09.819 [99/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.819 [100/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.819 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:09.819 [102/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:10.082 [103/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:10.082 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:10.082 [105/267] Linking static target lib/librte_mbuf.a 00:02:10.082 [106/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:10.082 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:10.082 [108/267] Linking static target lib/librte_net.a 00:02:10.343 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:10.343 [110/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:10.343 [111/267] 
Linking static target lib/librte_meter.a 00:02:10.343 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:10.608 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:10.608 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:10.608 [115/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.608 [116/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.608 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.875 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:10.875 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:11.138 [120/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.138 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:11.138 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:11.400 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:11.400 [124/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:11.400 [125/267] Linking static target lib/librte_pci.a 00:02:11.400 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:11.660 [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:11.660 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:11.660 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:11.660 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:11.660 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:11.660 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:11.660 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:11.660 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:11.660 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:11.921 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:11.921 [137/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.921 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:11.921 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:11.921 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:11.921 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:11.921 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:11.921 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:11.921 [144/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:11.921 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:11.921 [146/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:12.183 [147/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:12.183 [148/267] Linking static target lib/librte_cmdline.a 00:02:12.183 [149/267] Linking static target lib/librte_ethdev.a 00:02:12.183 [150/267] Compiling C object 
lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:12.445 [151/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:12.445 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:12.445 [153/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:12.445 [154/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:12.445 [155/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:12.445 [156/267] Linking static target lib/librte_timer.a 00:02:12.733 [157/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:12.733 [158/267] Linking static target lib/librte_hash.a 00:02:12.733 [159/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:12.733 [160/267] Linking static target lib/librte_compressdev.a 00:02:12.733 [161/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:12.733 [162/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:12.994 [163/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:12.994 [164/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:13.273 [165/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.273 [166/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:13.273 [167/267] Linking static target lib/librte_dmadev.a 00:02:13.273 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:13.273 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:13.273 [170/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:13.273 [171/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:13.273 [172/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:13.273 [173/267] Linking static target lib/librte_cryptodev.a 00:02:13.536 [174/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:13.536 [175/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.798 [176/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:13.798 [177/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.798 [178/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.798 [179/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:14.059 [180/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:14.059 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:14.059 [182/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:14.059 [183/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.321 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:14.321 [185/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:14.321 [186/267] Linking static target lib/librte_power.a 00:02:14.321 [187/267] Linking static target lib/librte_reorder.a 00:02:14.321 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:14.582 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:14.582 [190/267] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:02:14.582 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:14.582 [192/267] Linking static target lib/librte_security.a 00:02:14.851 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:14.851 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.121 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:15.382 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:15.382 [197/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.382 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:15.382 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:15.382 [200/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.643 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:15.906 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:15.906 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:15.906 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:15.906 [205/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:15.906 [206/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.906 [207/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:15.906 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:15.906 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:16.167 [210/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:16.167 [211/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:16.427 [212/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:16.427 [213/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:16.427 [214/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:16.427 [215/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:16.427 [216/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:16.427 [217/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:16.427 [218/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:16.427 [219/267] Linking static target drivers/librte_bus_pci.a 00:02:16.427 [220/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:16.427 [221/267] Linking static target drivers/librte_bus_vdev.a 00:02:16.427 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:16.688 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:16.688 [224/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:16.688 [225/267] Linking static target drivers/librte_mempool_ring.a 00:02:16.688 [226/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.950 [227/267] Generating drivers/rte_bus_pci.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:17.522 [228/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:17.522 [229/267] Linking static target lib/librte_vhost.a 00:02:18.907 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.907 [231/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.907 [232/267] Linking target lib/librte_eal.so.24.1 00:02:19.167 [233/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:19.167 [234/267] Linking target lib/librte_dmadev.so.24.1 00:02:19.167 [235/267] Linking target lib/librte_timer.so.24.1 00:02:19.167 [236/267] Linking target lib/librte_pci.so.24.1 00:02:19.167 [237/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:19.167 [238/267] Linking target lib/librte_ring.so.24.1 00:02:19.167 [239/267] Linking target lib/librte_meter.so.24.1 00:02:19.167 [240/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:19.167 [241/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:19.167 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:19.167 [243/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:19.167 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:19.428 [245/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:19.428 [246/267] Linking target lib/librte_rcu.so.24.1 00:02:19.428 [247/267] Linking target lib/librte_mempool.so.24.1 00:02:19.428 [248/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.428 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:19.428 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:19.428 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:19.428 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:19.690 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:19.690 [254/267] Linking target lib/librte_reorder.so.24.1 00:02:19.690 [255/267] Linking target lib/librte_cryptodev.so.24.1 00:02:19.690 [256/267] Linking target lib/librte_net.so.24.1 00:02:19.690 [257/267] Linking target lib/librte_compressdev.so.24.1 00:02:19.950 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:19.950 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:19.950 [260/267] Linking target lib/librte_security.so.24.1 00:02:19.950 [261/267] Linking target lib/librte_cmdline.so.24.1 00:02:19.950 [262/267] Linking target lib/librte_hash.so.24.1 00:02:19.950 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:19.950 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:20.211 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:20.211 [266/267] Linking target lib/librte_power.so.24.1 00:02:20.211 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:20.211 INFO: autodetecting backend as ninja 00:02:20.211 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:46.894 CC lib/ut_mock/mock.o 00:02:46.894 CC lib/log/log_flags.o 00:02:46.894 CC lib/log/log.o 00:02:46.894 CC 
lib/log/log_deprecated.o 00:02:46.894 CC lib/ut/ut.o 00:02:46.894 LIB libspdk_ut_mock.a 00:02:46.894 LIB libspdk_ut.a 00:02:46.894 LIB libspdk_log.a 00:02:46.894 SO libspdk_ut_mock.so.6.0 00:02:46.894 SO libspdk_ut.so.2.0 00:02:46.894 SO libspdk_log.so.7.1 00:02:46.894 SYMLINK libspdk_ut_mock.so 00:02:46.894 SYMLINK libspdk_ut.so 00:02:46.894 SYMLINK libspdk_log.so 00:02:46.894 CC lib/ioat/ioat.o 00:02:46.894 CC lib/util/bit_array.o 00:02:46.894 CXX lib/trace_parser/trace.o 00:02:46.894 CC lib/util/base64.o 00:02:46.894 CC lib/util/cpuset.o 00:02:46.894 CC lib/util/crc16.o 00:02:46.894 CC lib/util/crc32.o 00:02:46.894 CC lib/dma/dma.o 00:02:46.894 CC lib/util/crc32c.o 00:02:46.894 CC lib/vfio_user/host/vfio_user_pci.o 00:02:46.894 CC lib/util/crc32_ieee.o 00:02:46.894 CC lib/util/crc64.o 00:02:46.894 CC lib/util/dif.o 00:02:46.894 CC lib/util/fd.o 00:02:46.894 CC lib/util/fd_group.o 00:02:46.894 CC lib/vfio_user/host/vfio_user.o 00:02:46.894 LIB libspdk_dma.a 00:02:46.894 SO libspdk_dma.so.5.0 00:02:46.894 CC lib/util/file.o 00:02:46.894 CC lib/util/hexlify.o 00:02:46.894 SYMLINK libspdk_dma.so 00:02:46.894 CC lib/util/iov.o 00:02:46.894 CC lib/util/math.o 00:02:46.894 CC lib/util/net.o 00:02:46.894 LIB libspdk_ioat.a 00:02:46.894 SO libspdk_ioat.so.7.0 00:02:46.894 CC lib/util/pipe.o 00:02:46.894 CC lib/util/strerror_tls.o 00:02:46.894 CC lib/util/string.o 00:02:46.894 LIB libspdk_vfio_user.a 00:02:46.894 CC lib/util/uuid.o 00:02:46.894 CC lib/util/xor.o 00:02:46.894 SO libspdk_vfio_user.so.5.0 00:02:46.894 CC lib/util/zipf.o 00:02:46.894 SYMLINK libspdk_ioat.so 00:02:46.894 CC lib/util/md5.o 00:02:46.894 SYMLINK libspdk_vfio_user.so 00:02:46.894 LIB libspdk_util.a 00:02:46.894 SO libspdk_util.so.10.1 00:02:47.156 SYMLINK libspdk_util.so 00:02:47.156 LIB libspdk_trace_parser.a 00:02:47.156 SO libspdk_trace_parser.so.6.0 00:02:47.156 CC lib/rdma_utils/rdma_utils.o 00:02:47.156 CC lib/env_dpdk/env.o 00:02:47.156 CC lib/vmd/vmd.o 00:02:47.156 CC lib/env_dpdk/memory.o 00:02:47.156 CC lib/idxd/idxd.o 00:02:47.156 CC lib/env_dpdk/pci.o 00:02:47.156 CC lib/vmd/led.o 00:02:47.156 CC lib/conf/conf.o 00:02:47.156 CC lib/json/json_parse.o 00:02:47.156 SYMLINK libspdk_trace_parser.so 00:02:47.156 CC lib/json/json_util.o 00:02:47.416 LIB libspdk_conf.a 00:02:47.416 CC lib/json/json_write.o 00:02:47.677 SO libspdk_conf.so.6.0 00:02:47.677 CC lib/env_dpdk/init.o 00:02:47.677 CC lib/env_dpdk/threads.o 00:02:47.677 SYMLINK libspdk_conf.so 00:02:47.677 CC lib/env_dpdk/pci_ioat.o 00:02:47.677 LIB libspdk_rdma_utils.a 00:02:47.677 SO libspdk_rdma_utils.so.1.0 00:02:47.677 CC lib/env_dpdk/pci_virtio.o 00:02:47.677 CC lib/idxd/idxd_user.o 00:02:47.677 CC lib/idxd/idxd_kernel.o 00:02:47.938 SYMLINK libspdk_rdma_utils.so 00:02:47.938 CC lib/env_dpdk/pci_vmd.o 00:02:47.938 LIB libspdk_vmd.a 00:02:47.938 CC lib/env_dpdk/pci_idxd.o 00:02:47.938 SO libspdk_vmd.so.6.0 00:02:47.938 CC lib/env_dpdk/pci_event.o 00:02:47.938 LIB libspdk_json.a 00:02:47.938 SYMLINK libspdk_vmd.so 00:02:47.938 CC lib/env_dpdk/sigbus_handler.o 00:02:47.938 CC lib/env_dpdk/pci_dpdk.o 00:02:47.938 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:47.938 SO libspdk_json.so.6.0 00:02:47.938 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:47.938 SYMLINK libspdk_json.so 00:02:48.201 CC lib/rdma_provider/common.o 00:02:48.201 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:48.201 LIB libspdk_idxd.a 00:02:48.201 SO libspdk_idxd.so.12.1 00:02:48.201 CC lib/jsonrpc/jsonrpc_server.o 00:02:48.201 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:48.201 CC 
lib/jsonrpc/jsonrpc_client.o 00:02:48.201 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:48.201 SYMLINK libspdk_idxd.so 00:02:48.463 LIB libspdk_rdma_provider.a 00:02:48.463 SO libspdk_rdma_provider.so.7.0 00:02:48.463 SYMLINK libspdk_rdma_provider.so 00:02:48.463 LIB libspdk_env_dpdk.a 00:02:48.463 LIB libspdk_jsonrpc.a 00:02:48.725 SO libspdk_env_dpdk.so.15.1 00:02:48.725 SO libspdk_jsonrpc.so.6.0 00:02:48.725 SYMLINK libspdk_jsonrpc.so 00:02:48.725 SYMLINK libspdk_env_dpdk.so 00:02:48.986 CC lib/rpc/rpc.o 00:02:49.247 LIB libspdk_rpc.a 00:02:49.247 SO libspdk_rpc.so.6.0 00:02:49.247 SYMLINK libspdk_rpc.so 00:02:49.509 CC lib/notify/notify.o 00:02:49.509 CC lib/notify/notify_rpc.o 00:02:49.509 CC lib/trace/trace.o 00:02:49.509 CC lib/trace/trace_flags.o 00:02:49.509 CC lib/trace/trace_rpc.o 00:02:49.509 CC lib/keyring/keyring.o 00:02:49.509 CC lib/keyring/keyring_rpc.o 00:02:49.770 LIB libspdk_notify.a 00:02:49.770 SO libspdk_notify.so.6.0 00:02:49.770 SYMLINK libspdk_notify.so 00:02:49.770 LIB libspdk_keyring.a 00:02:49.770 LIB libspdk_trace.a 00:02:49.770 SO libspdk_keyring.so.2.0 00:02:49.770 SO libspdk_trace.so.11.0 00:02:50.031 SYMLINK libspdk_keyring.so 00:02:50.031 SYMLINK libspdk_trace.so 00:02:50.292 CC lib/sock/sock.o 00:02:50.292 CC lib/sock/sock_rpc.o 00:02:50.292 CC lib/thread/thread.o 00:02:50.292 CC lib/thread/iobuf.o 00:02:50.554 LIB libspdk_sock.a 00:02:50.554 SO libspdk_sock.so.10.0 00:02:50.554 SYMLINK libspdk_sock.so 00:02:50.816 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:50.816 CC lib/nvme/nvme_ctrlr.o 00:02:50.816 CC lib/nvme/nvme_fabric.o 00:02:50.816 CC lib/nvme/nvme_ns_cmd.o 00:02:50.816 CC lib/nvme/nvme_ns.o 00:02:50.816 CC lib/nvme/nvme_pcie.o 00:02:50.816 CC lib/nvme/nvme_pcie_common.o 00:02:50.816 CC lib/nvme/nvme.o 00:02:50.816 CC lib/nvme/nvme_qpair.o 00:02:51.763 CC lib/nvme/nvme_quirks.o 00:02:51.763 CC lib/nvme/nvme_transport.o 00:02:51.763 CC lib/nvme/nvme_discovery.o 00:02:51.763 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:51.763 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:51.763 CC lib/nvme/nvme_tcp.o 00:02:52.025 CC lib/nvme/nvme_opal.o 00:02:52.025 CC lib/nvme/nvme_io_msg.o 00:02:52.025 LIB libspdk_thread.a 00:02:52.025 SO libspdk_thread.so.11.0 00:02:52.287 SYMLINK libspdk_thread.so 00:02:52.287 CC lib/nvme/nvme_poll_group.o 00:02:52.287 CC lib/nvme/nvme_zns.o 00:02:52.287 CC lib/nvme/nvme_stubs.o 00:02:52.287 CC lib/nvme/nvme_auth.o 00:02:52.547 CC lib/blob/blobstore.o 00:02:52.547 CC lib/accel/accel.o 00:02:52.809 CC lib/init/json_config.o 00:02:52.809 CC lib/init/subsystem.o 00:02:52.809 CC lib/virtio/virtio.o 00:02:53.070 CC lib/init/subsystem_rpc.o 00:02:53.070 CC lib/fsdev/fsdev.o 00:02:53.070 CC lib/init/rpc.o 00:02:53.070 CC lib/fsdev/fsdev_io.o 00:02:53.070 CC lib/fsdev/fsdev_rpc.o 00:02:53.331 CC lib/blob/request.o 00:02:53.331 LIB libspdk_init.a 00:02:53.331 CC lib/blob/zeroes.o 00:02:53.331 SO libspdk_init.so.6.0 00:02:53.331 CC lib/virtio/virtio_vhost_user.o 00:02:53.331 CC lib/accel/accel_rpc.o 00:02:53.331 SYMLINK libspdk_init.so 00:02:53.331 CC lib/accel/accel_sw.o 00:02:53.592 CC lib/blob/blob_bs_dev.o 00:02:53.592 CC lib/nvme/nvme_cuse.o 00:02:53.593 CC lib/event/app.o 00:02:53.593 CC lib/virtio/virtio_vfio_user.o 00:02:53.593 CC lib/virtio/virtio_pci.o 00:02:53.593 CC lib/nvme/nvme_rdma.o 00:02:53.593 CC lib/event/reactor.o 00:02:53.593 LIB libspdk_accel.a 00:02:53.855 SO libspdk_accel.so.16.0 00:02:53.855 SYMLINK libspdk_accel.so 00:02:53.855 CC lib/event/log_rpc.o 00:02:53.855 CC lib/event/app_rpc.o 00:02:53.855 LIB libspdk_virtio.a 00:02:53.855 LIB 
libspdk_fsdev.a 00:02:53.855 SO libspdk_fsdev.so.2.0 00:02:53.855 SO libspdk_virtio.so.7.0 00:02:53.855 CC lib/event/scheduler_static.o 00:02:53.855 SYMLINK libspdk_fsdev.so 00:02:53.855 CC lib/bdev/bdev.o 00:02:53.855 CC lib/bdev/bdev_rpc.o 00:02:54.116 CC lib/bdev/bdev_zone.o 00:02:54.116 SYMLINK libspdk_virtio.so 00:02:54.116 CC lib/bdev/part.o 00:02:54.116 CC lib/bdev/scsi_nvme.o 00:02:54.116 LIB libspdk_event.a 00:02:54.116 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:54.116 SO libspdk_event.so.14.0 00:02:54.116 SYMLINK libspdk_event.so 00:02:54.689 LIB libspdk_fuse_dispatcher.a 00:02:54.689 SO libspdk_fuse_dispatcher.so.1.0 00:02:54.689 SYMLINK libspdk_fuse_dispatcher.so 00:02:54.950 LIB libspdk_nvme.a 00:02:54.950 SO libspdk_nvme.so.15.0 00:02:55.212 SYMLINK libspdk_nvme.so 00:02:55.539 LIB libspdk_blob.a 00:02:55.539 SO libspdk_blob.so.12.0 00:02:55.539 SYMLINK libspdk_blob.so 00:02:55.804 CC lib/blobfs/tree.o 00:02:55.804 CC lib/blobfs/blobfs.o 00:02:55.804 CC lib/lvol/lvol.o 00:02:56.065 LIB libspdk_bdev.a 00:02:56.328 SO libspdk_bdev.so.17.0 00:02:56.328 SYMLINK libspdk_bdev.so 00:02:56.588 LIB libspdk_blobfs.a 00:02:56.588 SO libspdk_blobfs.so.11.0 00:02:56.588 CC lib/ftl/ftl_core.o 00:02:56.588 CC lib/ftl/ftl_layout.o 00:02:56.588 CC lib/ftl/ftl_init.o 00:02:56.588 CC lib/ftl/ftl_debug.o 00:02:56.588 CC lib/nvmf/ctrlr.o 00:02:56.588 CC lib/nbd/nbd.o 00:02:56.588 CC lib/ublk/ublk.o 00:02:56.588 CC lib/scsi/dev.o 00:02:56.588 SYMLINK libspdk_blobfs.so 00:02:56.588 CC lib/scsi/lun.o 00:02:56.588 LIB libspdk_lvol.a 00:02:56.588 SO libspdk_lvol.so.11.0 00:02:56.588 SYMLINK libspdk_lvol.so 00:02:56.588 CC lib/nbd/nbd_rpc.o 00:02:56.588 CC lib/scsi/port.o 00:02:56.588 CC lib/scsi/scsi.o 00:02:56.850 CC lib/scsi/scsi_bdev.o 00:02:56.850 CC lib/scsi/scsi_pr.o 00:02:56.850 CC lib/scsi/scsi_rpc.o 00:02:56.850 CC lib/scsi/task.o 00:02:56.850 CC lib/ublk/ublk_rpc.o 00:02:56.850 CC lib/ftl/ftl_io.o 00:02:56.850 CC lib/nvmf/ctrlr_discovery.o 00:02:56.850 LIB libspdk_nbd.a 00:02:56.850 SO libspdk_nbd.so.7.0 00:02:56.850 SYMLINK libspdk_nbd.so 00:02:56.850 CC lib/nvmf/ctrlr_bdev.o 00:02:56.850 CC lib/ftl/ftl_sb.o 00:02:57.112 CC lib/ftl/ftl_l2p.o 00:02:57.112 CC lib/ftl/ftl_l2p_flat.o 00:02:57.112 LIB libspdk_ublk.a 00:02:57.112 CC lib/nvmf/subsystem.o 00:02:57.112 SO libspdk_ublk.so.3.0 00:02:57.112 SYMLINK libspdk_ublk.so 00:02:57.112 LIB libspdk_scsi.a 00:02:57.112 CC lib/nvmf/nvmf.o 00:02:57.112 CC lib/ftl/ftl_nv_cache.o 00:02:57.112 CC lib/ftl/ftl_band.o 00:02:57.112 CC lib/ftl/ftl_band_ops.o 00:02:57.112 SO libspdk_scsi.so.9.0 00:02:57.375 CC lib/ftl/ftl_writer.o 00:02:57.375 CC lib/nvmf/nvmf_rpc.o 00:02:57.375 SYMLINK libspdk_scsi.so 00:02:57.375 CC lib/nvmf/transport.o 00:02:57.375 CC lib/ftl/ftl_rq.o 00:02:57.637 CC lib/ftl/ftl_reloc.o 00:02:57.637 CC lib/ftl/ftl_l2p_cache.o 00:02:57.637 CC lib/nvmf/tcp.o 00:02:57.637 CC lib/iscsi/conn.o 00:02:57.637 CC lib/iscsi/init_grp.o 00:02:57.898 CC lib/ftl/ftl_p2l.o 00:02:57.898 CC lib/iscsi/iscsi.o 00:02:57.898 CC lib/nvmf/stubs.o 00:02:57.898 CC lib/ftl/ftl_p2l_log.o 00:02:57.898 CC lib/ftl/mngt/ftl_mngt.o 00:02:58.236 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:58.236 CC lib/vhost/vhost.o 00:02:58.236 CC lib/vhost/vhost_rpc.o 00:02:58.236 CC lib/vhost/vhost_scsi.o 00:02:58.236 CC lib/vhost/vhost_blk.o 00:02:58.236 CC lib/vhost/rte_vhost_user.o 00:02:58.236 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:58.236 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:58.236 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:58.533 CC lib/iscsi/param.o 00:02:58.533 CC 
lib/ftl/mngt/ftl_mngt_misc.o 00:02:58.533 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:58.533 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:58.793 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:58.793 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:58.793 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:58.793 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:58.793 CC lib/nvmf/mdns_server.o 00:02:58.793 CC lib/iscsi/portal_grp.o 00:02:58.793 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:58.793 CC lib/ftl/utils/ftl_conf.o 00:02:59.084 CC lib/ftl/utils/ftl_md.o 00:02:59.084 CC lib/ftl/utils/ftl_mempool.o 00:02:59.084 CC lib/ftl/utils/ftl_bitmap.o 00:02:59.084 CC lib/ftl/utils/ftl_property.o 00:02:59.084 CC lib/nvmf/rdma.o 00:02:59.084 LIB libspdk_vhost.a 00:02:59.084 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:59.084 CC lib/iscsi/tgt_node.o 00:02:59.084 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:59.084 SO libspdk_vhost.so.8.0 00:02:59.346 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:59.346 SYMLINK libspdk_vhost.so 00:02:59.346 CC lib/iscsi/iscsi_subsystem.o 00:02:59.346 CC lib/iscsi/iscsi_rpc.o 00:02:59.346 CC lib/iscsi/task.o 00:02:59.346 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:59.346 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:59.346 CC lib/nvmf/auth.o 00:02:59.346 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:59.346 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:59.608 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:59.608 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:59.608 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:59.608 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:59.608 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:59.608 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:59.608 CC lib/ftl/base/ftl_base_dev.o 00:02:59.608 CC lib/ftl/base/ftl_base_bdev.o 00:02:59.608 LIB libspdk_iscsi.a 00:02:59.608 CC lib/ftl/ftl_trace.o 00:02:59.608 SO libspdk_iscsi.so.8.0 00:02:59.871 SYMLINK libspdk_iscsi.so 00:02:59.871 LIB libspdk_ftl.a 00:03:00.131 SO libspdk_ftl.so.9.0 00:03:00.393 SYMLINK libspdk_ftl.so 00:03:00.963 LIB libspdk_nvmf.a 00:03:00.963 SO libspdk_nvmf.so.20.0 00:03:00.963 SYMLINK libspdk_nvmf.so 00:03:01.222 CC module/env_dpdk/env_dpdk_rpc.o 00:03:01.479 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:01.479 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:01.479 CC module/blob/bdev/blob_bdev.o 00:03:01.479 CC module/scheduler/gscheduler/gscheduler.o 00:03:01.479 CC module/keyring/file/keyring.o 00:03:01.479 CC module/accel/error/accel_error.o 00:03:01.479 CC module/fsdev/aio/fsdev_aio.o 00:03:01.479 CC module/keyring/linux/keyring.o 00:03:01.479 CC module/sock/posix/posix.o 00:03:01.479 LIB libspdk_env_dpdk_rpc.a 00:03:01.479 SO libspdk_env_dpdk_rpc.so.6.0 00:03:01.479 SYMLINK libspdk_env_dpdk_rpc.so 00:03:01.479 CC module/keyring/linux/keyring_rpc.o 00:03:01.479 CC module/keyring/file/keyring_rpc.o 00:03:01.479 CC module/accel/error/accel_error_rpc.o 00:03:01.479 LIB libspdk_scheduler_dynamic.a 00:03:01.479 LIB libspdk_scheduler_gscheduler.a 00:03:01.479 LIB libspdk_scheduler_dpdk_governor.a 00:03:01.479 LIB libspdk_blob_bdev.a 00:03:01.479 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:01.479 SO libspdk_scheduler_dynamic.so.4.0 00:03:01.479 SO libspdk_scheduler_gscheduler.so.4.0 00:03:01.479 SO libspdk_blob_bdev.so.12.0 00:03:01.737 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:01.737 SYMLINK libspdk_scheduler_gscheduler.so 00:03:01.737 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:01.737 LIB libspdk_keyring_file.a 00:03:01.737 LIB libspdk_keyring_linux.a 00:03:01.737 SYMLINK libspdk_scheduler_dynamic.so 00:03:01.737 SYMLINK libspdk_blob_bdev.so 
00:03:01.737 LIB libspdk_accel_error.a 00:03:01.737 SO libspdk_keyring_file.so.2.0 00:03:01.737 SO libspdk_keyring_linux.so.1.0 00:03:01.737 SO libspdk_accel_error.so.2.0 00:03:01.737 SYMLINK libspdk_keyring_file.so 00:03:01.737 CC module/sock/uring/uring.o 00:03:01.737 SYMLINK libspdk_keyring_linux.so 00:03:01.737 CC module/fsdev/aio/linux_aio_mgr.o 00:03:01.737 SYMLINK libspdk_accel_error.so 00:03:01.737 CC module/accel/ioat/accel_ioat.o 00:03:01.737 CC module/accel/ioat/accel_ioat_rpc.o 00:03:01.737 CC module/accel/iaa/accel_iaa.o 00:03:01.737 CC module/accel/dsa/accel_dsa.o 00:03:01.995 LIB libspdk_sock_posix.a 00:03:01.995 LIB libspdk_fsdev_aio.a 00:03:01.995 SO libspdk_sock_posix.so.6.0 00:03:01.995 LIB libspdk_accel_ioat.a 00:03:01.995 CC module/bdev/delay/vbdev_delay.o 00:03:01.995 SO libspdk_fsdev_aio.so.1.0 00:03:01.995 CC module/accel/iaa/accel_iaa_rpc.o 00:03:01.995 CC module/blobfs/bdev/blobfs_bdev.o 00:03:01.995 SO libspdk_accel_ioat.so.6.0 00:03:01.995 SYMLINK libspdk_sock_posix.so 00:03:01.995 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:01.995 CC module/bdev/error/vbdev_error.o 00:03:01.995 SYMLINK libspdk_fsdev_aio.so 00:03:01.995 SYMLINK libspdk_accel_ioat.so 00:03:01.995 CC module/bdev/error/vbdev_error_rpc.o 00:03:01.995 CC module/accel/dsa/accel_dsa_rpc.o 00:03:01.995 CC module/bdev/gpt/gpt.o 00:03:01.995 LIB libspdk_accel_iaa.a 00:03:01.995 SO libspdk_accel_iaa.so.3.0 00:03:02.252 CC module/bdev/gpt/vbdev_gpt.o 00:03:02.252 SYMLINK libspdk_accel_iaa.so 00:03:02.252 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:02.252 LIB libspdk_blobfs_bdev.a 00:03:02.252 SO libspdk_blobfs_bdev.so.6.0 00:03:02.252 LIB libspdk_accel_dsa.a 00:03:02.252 LIB libspdk_sock_uring.a 00:03:02.252 SO libspdk_accel_dsa.so.5.0 00:03:02.252 SYMLINK libspdk_blobfs_bdev.so 00:03:02.252 SO libspdk_sock_uring.so.5.0 00:03:02.252 SYMLINK libspdk_accel_dsa.so 00:03:02.252 SYMLINK libspdk_sock_uring.so 00:03:02.252 LIB libspdk_bdev_error.a 00:03:02.252 SO libspdk_bdev_error.so.6.0 00:03:02.252 CC module/bdev/lvol/vbdev_lvol.o 00:03:02.252 LIB libspdk_bdev_delay.a 00:03:02.252 LIB libspdk_bdev_gpt.a 00:03:02.252 CC module/bdev/malloc/bdev_malloc.o 00:03:02.252 SO libspdk_bdev_gpt.so.6.0 00:03:02.252 SO libspdk_bdev_delay.so.6.0 00:03:02.252 CC module/bdev/null/bdev_null.o 00:03:02.252 CC module/bdev/nvme/bdev_nvme.o 00:03:02.252 SYMLINK libspdk_bdev_error.so 00:03:02.252 CC module/bdev/raid/bdev_raid.o 00:03:02.511 CC module/bdev/passthru/vbdev_passthru.o 00:03:02.511 CC module/bdev/split/vbdev_split.o 00:03:02.511 SYMLINK libspdk_bdev_delay.so 00:03:02.511 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:02.511 SYMLINK libspdk_bdev_gpt.so 00:03:02.511 CC module/bdev/raid/bdev_raid_rpc.o 00:03:02.511 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:02.511 CC module/bdev/split/vbdev_split_rpc.o 00:03:02.511 CC module/bdev/null/bdev_null_rpc.o 00:03:02.511 CC module/bdev/raid/bdev_raid_sb.o 00:03:02.511 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:02.511 LIB libspdk_bdev_malloc.a 00:03:02.511 SO libspdk_bdev_malloc.so.6.0 00:03:02.771 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:02.771 SYMLINK libspdk_bdev_malloc.so 00:03:02.771 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:02.771 CC module/bdev/nvme/nvme_rpc.o 00:03:02.771 LIB libspdk_bdev_null.a 00:03:02.771 LIB libspdk_bdev_zone_block.a 00:03:02.771 LIB libspdk_bdev_split.a 00:03:02.772 SO libspdk_bdev_null.so.6.0 00:03:02.772 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:02.772 SO libspdk_bdev_zone_block.so.6.0 00:03:02.772 CC 
module/bdev/nvme/bdev_mdns_client.o 00:03:02.772 SO libspdk_bdev_split.so.6.0 00:03:02.772 SYMLINK libspdk_bdev_null.so 00:03:02.772 LIB libspdk_bdev_passthru.a 00:03:02.772 SYMLINK libspdk_bdev_zone_block.so 00:03:02.772 CC module/bdev/nvme/vbdev_opal.o 00:03:02.772 SYMLINK libspdk_bdev_split.so 00:03:02.772 SO libspdk_bdev_passthru.so.6.0 00:03:02.772 SYMLINK libspdk_bdev_passthru.so 00:03:02.772 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:02.772 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:03.032 CC module/bdev/uring/bdev_uring.o 00:03:03.032 CC module/bdev/raid/raid0.o 00:03:03.032 CC module/bdev/aio/bdev_aio.o 00:03:03.032 CC module/bdev/raid/raid1.o 00:03:03.032 LIB libspdk_bdev_lvol.a 00:03:03.032 SO libspdk_bdev_lvol.so.6.0 00:03:03.032 CC module/bdev/uring/bdev_uring_rpc.o 00:03:03.032 SYMLINK libspdk_bdev_lvol.so 00:03:03.032 CC module/bdev/aio/bdev_aio_rpc.o 00:03:03.032 CC module/bdev/ftl/bdev_ftl.o 00:03:03.292 CC module/bdev/raid/concat.o 00:03:03.292 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:03.292 CC module/bdev/iscsi/bdev_iscsi.o 00:03:03.292 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:03.292 LIB libspdk_bdev_uring.a 00:03:03.292 SO libspdk_bdev_uring.so.6.0 00:03:03.292 LIB libspdk_bdev_aio.a 00:03:03.292 SO libspdk_bdev_aio.so.6.0 00:03:03.292 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:03.292 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:03.292 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:03.292 SYMLINK libspdk_bdev_uring.so 00:03:03.292 SYMLINK libspdk_bdev_aio.so 00:03:03.292 LIB libspdk_bdev_raid.a 00:03:03.292 LIB libspdk_bdev_ftl.a 00:03:03.552 SO libspdk_bdev_raid.so.6.0 00:03:03.552 SO libspdk_bdev_ftl.so.6.0 00:03:03.552 SYMLINK libspdk_bdev_raid.so 00:03:03.552 SYMLINK libspdk_bdev_ftl.so 00:03:03.552 LIB libspdk_bdev_iscsi.a 00:03:03.552 SO libspdk_bdev_iscsi.so.6.0 00:03:03.552 SYMLINK libspdk_bdev_iscsi.so 00:03:03.871 LIB libspdk_bdev_virtio.a 00:03:03.871 SO libspdk_bdev_virtio.so.6.0 00:03:03.871 SYMLINK libspdk_bdev_virtio.so 00:03:04.133 LIB libspdk_bdev_nvme.a 00:03:04.133 SO libspdk_bdev_nvme.so.7.1 00:03:04.133 SYMLINK libspdk_bdev_nvme.so 00:03:04.701 CC module/event/subsystems/scheduler/scheduler.o 00:03:04.701 CC module/event/subsystems/sock/sock.o 00:03:04.701 CC module/event/subsystems/vmd/vmd.o 00:03:04.701 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:04.701 CC module/event/subsystems/iobuf/iobuf.o 00:03:04.701 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:04.701 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:04.701 CC module/event/subsystems/fsdev/fsdev.o 00:03:04.701 CC module/event/subsystems/keyring/keyring.o 00:03:04.701 LIB libspdk_event_keyring.a 00:03:04.701 LIB libspdk_event_vhost_blk.a 00:03:04.701 SO libspdk_event_keyring.so.1.0 00:03:04.701 LIB libspdk_event_sock.a 00:03:04.701 LIB libspdk_event_fsdev.a 00:03:04.701 SO libspdk_event_vhost_blk.so.3.0 00:03:04.701 SO libspdk_event_sock.so.5.0 00:03:04.701 SO libspdk_event_fsdev.so.1.0 00:03:04.701 SYMLINK libspdk_event_keyring.so 00:03:04.701 LIB libspdk_event_vmd.a 00:03:04.701 LIB libspdk_event_iobuf.a 00:03:04.701 SYMLINK libspdk_event_sock.so 00:03:04.701 LIB libspdk_event_scheduler.a 00:03:04.701 SYMLINK libspdk_event_vhost_blk.so 00:03:04.701 SO libspdk_event_vmd.so.6.0 00:03:04.701 SYMLINK libspdk_event_fsdev.so 00:03:04.701 SO libspdk_event_iobuf.so.3.0 00:03:04.701 SO libspdk_event_scheduler.so.4.0 00:03:04.701 SYMLINK libspdk_event_vmd.so 00:03:04.701 SYMLINK libspdk_event_scheduler.so 00:03:04.701 SYMLINK libspdk_event_iobuf.so 00:03:04.959 CC 
module/event/subsystems/accel/accel.o 00:03:05.218 LIB libspdk_event_accel.a 00:03:05.218 SO libspdk_event_accel.so.6.0 00:03:05.218 SYMLINK libspdk_event_accel.so 00:03:05.480 CC module/event/subsystems/bdev/bdev.o 00:03:05.480 LIB libspdk_event_bdev.a 00:03:05.480 SO libspdk_event_bdev.so.6.0 00:03:05.739 SYMLINK libspdk_event_bdev.so 00:03:05.739 CC module/event/subsystems/nbd/nbd.o 00:03:05.739 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:05.739 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:05.739 CC module/event/subsystems/ublk/ublk.o 00:03:05.739 CC module/event/subsystems/scsi/scsi.o 00:03:05.999 LIB libspdk_event_nbd.a 00:03:05.999 LIB libspdk_event_ublk.a 00:03:05.999 LIB libspdk_event_scsi.a 00:03:05.999 SO libspdk_event_nbd.so.6.0 00:03:05.999 SO libspdk_event_ublk.so.3.0 00:03:05.999 SO libspdk_event_scsi.so.6.0 00:03:05.999 LIB libspdk_event_nvmf.a 00:03:05.999 SYMLINK libspdk_event_nbd.so 00:03:05.999 SYMLINK libspdk_event_scsi.so 00:03:05.999 SYMLINK libspdk_event_ublk.so 00:03:05.999 SO libspdk_event_nvmf.so.6.0 00:03:05.999 SYMLINK libspdk_event_nvmf.so 00:03:06.260 CC module/event/subsystems/iscsi/iscsi.o 00:03:06.260 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:06.260 LIB libspdk_event_vhost_scsi.a 00:03:06.260 LIB libspdk_event_iscsi.a 00:03:06.260 SO libspdk_event_iscsi.so.6.0 00:03:06.260 SO libspdk_event_vhost_scsi.so.3.0 00:03:06.260 SYMLINK libspdk_event_iscsi.so 00:03:06.260 SYMLINK libspdk_event_vhost_scsi.so 00:03:06.520 SO libspdk.so.6.0 00:03:06.521 SYMLINK libspdk.so 00:03:06.781 CXX app/trace/trace.o 00:03:06.781 CC test/rpc_client/rpc_client_test.o 00:03:06.781 TEST_HEADER include/spdk/accel.h 00:03:06.781 TEST_HEADER include/spdk/accel_module.h 00:03:06.781 TEST_HEADER include/spdk/assert.h 00:03:06.781 TEST_HEADER include/spdk/barrier.h 00:03:06.781 TEST_HEADER include/spdk/base64.h 00:03:06.781 TEST_HEADER include/spdk/bdev.h 00:03:06.781 TEST_HEADER include/spdk/bdev_module.h 00:03:06.781 TEST_HEADER include/spdk/bdev_zone.h 00:03:06.781 TEST_HEADER include/spdk/bit_array.h 00:03:06.781 TEST_HEADER include/spdk/bit_pool.h 00:03:06.781 TEST_HEADER include/spdk/blob_bdev.h 00:03:06.781 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:06.781 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:06.781 TEST_HEADER include/spdk/blobfs.h 00:03:06.781 TEST_HEADER include/spdk/blob.h 00:03:06.781 TEST_HEADER include/spdk/conf.h 00:03:06.782 TEST_HEADER include/spdk/config.h 00:03:06.782 TEST_HEADER include/spdk/cpuset.h 00:03:06.782 TEST_HEADER include/spdk/crc16.h 00:03:06.782 TEST_HEADER include/spdk/crc32.h 00:03:06.782 TEST_HEADER include/spdk/crc64.h 00:03:06.782 TEST_HEADER include/spdk/dif.h 00:03:06.782 TEST_HEADER include/spdk/dma.h 00:03:06.782 TEST_HEADER include/spdk/endian.h 00:03:06.782 TEST_HEADER include/spdk/env_dpdk.h 00:03:06.782 TEST_HEADER include/spdk/env.h 00:03:06.782 TEST_HEADER include/spdk/event.h 00:03:06.782 TEST_HEADER include/spdk/fd_group.h 00:03:06.782 TEST_HEADER include/spdk/fd.h 00:03:06.782 TEST_HEADER include/spdk/file.h 00:03:06.782 TEST_HEADER include/spdk/fsdev.h 00:03:06.782 TEST_HEADER include/spdk/fsdev_module.h 00:03:06.782 TEST_HEADER include/spdk/ftl.h 00:03:06.782 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:06.782 CC test/thread/poller_perf/poller_perf.o 00:03:06.782 TEST_HEADER include/spdk/gpt_spec.h 00:03:06.782 CC examples/ioat/perf/perf.o 00:03:06.782 TEST_HEADER include/spdk/hexlify.h 00:03:06.782 TEST_HEADER include/spdk/histogram_data.h 00:03:06.782 TEST_HEADER include/spdk/idxd.h 
00:03:06.782 CC examples/util/zipf/zipf.o 00:03:06.782 TEST_HEADER include/spdk/idxd_spec.h 00:03:06.782 TEST_HEADER include/spdk/init.h 00:03:06.782 TEST_HEADER include/spdk/ioat.h 00:03:06.782 TEST_HEADER include/spdk/ioat_spec.h 00:03:06.782 CC test/dma/test_dma/test_dma.o 00:03:06.782 TEST_HEADER include/spdk/iscsi_spec.h 00:03:06.782 TEST_HEADER include/spdk/json.h 00:03:06.782 TEST_HEADER include/spdk/jsonrpc.h 00:03:06.782 TEST_HEADER include/spdk/keyring.h 00:03:06.782 TEST_HEADER include/spdk/keyring_module.h 00:03:06.782 TEST_HEADER include/spdk/likely.h 00:03:06.782 TEST_HEADER include/spdk/log.h 00:03:06.782 TEST_HEADER include/spdk/lvol.h 00:03:06.782 TEST_HEADER include/spdk/md5.h 00:03:06.782 TEST_HEADER include/spdk/memory.h 00:03:06.782 TEST_HEADER include/spdk/mmio.h 00:03:06.782 TEST_HEADER include/spdk/nbd.h 00:03:06.782 TEST_HEADER include/spdk/net.h 00:03:06.782 TEST_HEADER include/spdk/notify.h 00:03:06.782 TEST_HEADER include/spdk/nvme.h 00:03:06.782 TEST_HEADER include/spdk/nvme_intel.h 00:03:06.782 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:06.782 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:06.782 CC test/app/bdev_svc/bdev_svc.o 00:03:06.782 TEST_HEADER include/spdk/nvme_spec.h 00:03:06.782 TEST_HEADER include/spdk/nvme_zns.h 00:03:06.782 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:06.782 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:06.782 TEST_HEADER include/spdk/nvmf.h 00:03:06.782 TEST_HEADER include/spdk/nvmf_spec.h 00:03:06.782 TEST_HEADER include/spdk/nvmf_transport.h 00:03:06.782 TEST_HEADER include/spdk/opal.h 00:03:06.782 TEST_HEADER include/spdk/opal_spec.h 00:03:06.782 TEST_HEADER include/spdk/pci_ids.h 00:03:06.782 TEST_HEADER include/spdk/pipe.h 00:03:06.782 TEST_HEADER include/spdk/queue.h 00:03:06.782 CC test/env/mem_callbacks/mem_callbacks.o 00:03:06.782 TEST_HEADER include/spdk/reduce.h 00:03:06.782 TEST_HEADER include/spdk/rpc.h 00:03:06.782 TEST_HEADER include/spdk/scheduler.h 00:03:06.782 TEST_HEADER include/spdk/scsi.h 00:03:06.782 LINK rpc_client_test 00:03:06.782 TEST_HEADER include/spdk/scsi_spec.h 00:03:06.782 TEST_HEADER include/spdk/sock.h 00:03:06.782 TEST_HEADER include/spdk/stdinc.h 00:03:06.782 TEST_HEADER include/spdk/string.h 00:03:06.782 TEST_HEADER include/spdk/thread.h 00:03:06.782 TEST_HEADER include/spdk/trace.h 00:03:06.782 TEST_HEADER include/spdk/trace_parser.h 00:03:06.782 TEST_HEADER include/spdk/tree.h 00:03:06.782 TEST_HEADER include/spdk/ublk.h 00:03:06.782 TEST_HEADER include/spdk/util.h 00:03:06.782 TEST_HEADER include/spdk/uuid.h 00:03:06.782 TEST_HEADER include/spdk/version.h 00:03:06.782 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:06.782 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:06.782 TEST_HEADER include/spdk/vhost.h 00:03:06.782 TEST_HEADER include/spdk/vmd.h 00:03:06.782 TEST_HEADER include/spdk/xor.h 00:03:06.782 LINK poller_perf 00:03:06.782 TEST_HEADER include/spdk/zipf.h 00:03:06.782 CXX test/cpp_headers/accel.o 00:03:06.782 LINK interrupt_tgt 00:03:06.782 LINK zipf 00:03:07.042 CXX test/cpp_headers/accel_module.o 00:03:07.042 LINK ioat_perf 00:03:07.042 LINK bdev_svc 00:03:07.042 LINK spdk_trace 00:03:07.043 CC test/app/histogram_perf/histogram_perf.o 00:03:07.043 CC test/app/jsoncat/jsoncat.o 00:03:07.043 CXX test/cpp_headers/assert.o 00:03:07.043 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:07.043 CC test/app/stub/stub.o 00:03:07.043 CC examples/ioat/verify/verify.o 00:03:07.043 CXX test/cpp_headers/barrier.o 00:03:07.043 LINK test_dma 00:03:07.304 LINK jsoncat 00:03:07.304 CC 
app/trace_record/trace_record.o 00:03:07.304 LINK histogram_perf 00:03:07.304 CXX test/cpp_headers/base64.o 00:03:07.304 LINK verify 00:03:07.304 LINK stub 00:03:07.304 CC app/nvmf_tgt/nvmf_main.o 00:03:07.304 LINK mem_callbacks 00:03:07.304 CXX test/cpp_headers/bdev.o 00:03:07.304 LINK nvme_fuzz 00:03:07.304 CXX test/cpp_headers/bdev_module.o 00:03:07.565 CC app/spdk_tgt/spdk_tgt.o 00:03:07.565 LINK spdk_trace_record 00:03:07.565 CC app/iscsi_tgt/iscsi_tgt.o 00:03:07.565 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:07.565 CXX test/cpp_headers/bdev_zone.o 00:03:07.565 CC test/env/vtophys/vtophys.o 00:03:07.565 CXX test/cpp_headers/bit_array.o 00:03:07.565 LINK nvmf_tgt 00:03:07.565 CC examples/thread/thread/thread_ex.o 00:03:07.565 CC app/spdk_lspci/spdk_lspci.o 00:03:07.565 CXX test/cpp_headers/bit_pool.o 00:03:07.565 LINK vtophys 00:03:07.565 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:07.565 LINK spdk_tgt 00:03:07.565 LINK iscsi_tgt 00:03:07.826 CXX test/cpp_headers/blob_bdev.o 00:03:07.826 LINK spdk_lspci 00:03:07.826 LINK env_dpdk_post_init 00:03:07.826 CC app/spdk_nvme_perf/perf.o 00:03:07.826 CC test/env/memory/memory_ut.o 00:03:07.826 LINK thread 00:03:07.826 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:07.826 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:07.826 CXX test/cpp_headers/blobfs_bdev.o 00:03:07.826 CC test/env/pci/pci_ut.o 00:03:08.085 CC examples/sock/hello_world/hello_sock.o 00:03:08.085 CC app/spdk_nvme_identify/identify.o 00:03:08.085 CC examples/vmd/lsvmd/lsvmd.o 00:03:08.085 CXX test/cpp_headers/blobfs.o 00:03:08.085 CC examples/idxd/perf/perf.o 00:03:08.085 LINK lsvmd 00:03:08.085 LINK pci_ut 00:03:08.085 LINK hello_sock 00:03:08.085 LINK vhost_fuzz 00:03:08.345 CXX test/cpp_headers/blob.o 00:03:08.345 CC examples/vmd/led/led.o 00:03:08.345 CXX test/cpp_headers/conf.o 00:03:08.345 LINK idxd_perf 00:03:08.345 CC app/spdk_nvme_discover/discovery_aer.o 00:03:08.345 CC test/event/event_perf/event_perf.o 00:03:08.603 LINK spdk_nvme_perf 00:03:08.603 CC test/nvme/aer/aer.o 00:03:08.603 LINK led 00:03:08.603 CXX test/cpp_headers/config.o 00:03:08.603 CXX test/cpp_headers/cpuset.o 00:03:08.603 LINK memory_ut 00:03:08.603 LINK event_perf 00:03:08.603 LINK spdk_nvme_identify 00:03:08.603 CC test/nvme/reset/reset.o 00:03:08.603 LINK spdk_nvme_discover 00:03:08.603 CXX test/cpp_headers/crc16.o 00:03:08.603 CXX test/cpp_headers/crc32.o 00:03:08.603 LINK aer 00:03:08.860 CC test/event/reactor/reactor.o 00:03:08.860 LINK reset 00:03:08.860 CC app/spdk_top/spdk_top.o 00:03:08.860 CC test/nvme/sgl/sgl.o 00:03:08.860 CXX test/cpp_headers/crc64.o 00:03:08.860 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:08.860 CC test/nvme/e2edp/nvme_dp.o 00:03:08.860 CXX test/cpp_headers/dif.o 00:03:08.860 CXX test/cpp_headers/dma.o 00:03:08.860 CC test/accel/dif/dif.o 00:03:08.860 LINK reactor 00:03:08.860 LINK iscsi_fuzz 00:03:08.860 CC test/event/reactor_perf/reactor_perf.o 00:03:09.120 CXX test/cpp_headers/endian.o 00:03:09.120 CXX test/cpp_headers/env_dpdk.o 00:03:09.120 LINK hello_fsdev 00:03:09.120 CXX test/cpp_headers/env.o 00:03:09.120 CC test/event/app_repeat/app_repeat.o 00:03:09.120 LINK reactor_perf 00:03:09.120 LINK nvme_dp 00:03:09.120 LINK sgl 00:03:09.120 CXX test/cpp_headers/event.o 00:03:09.120 CXX test/cpp_headers/fd_group.o 00:03:09.120 CXX test/cpp_headers/fd.o 00:03:09.120 LINK app_repeat 00:03:09.120 CC test/nvme/overhead/overhead.o 00:03:09.380 CC test/nvme/err_injection/err_injection.o 00:03:09.380 LINK dif 00:03:09.380 CC 
examples/accel/perf/accel_perf.o 00:03:09.380 CXX test/cpp_headers/file.o 00:03:09.380 CC examples/blob/hello_world/hello_blob.o 00:03:09.380 CC app/vhost/vhost.o 00:03:09.380 LINK err_injection 00:03:09.380 CC test/event/scheduler/scheduler.o 00:03:09.380 LINK overhead 00:03:09.380 CC test/blobfs/mkfs/mkfs.o 00:03:09.380 LINK spdk_top 00:03:09.640 CXX test/cpp_headers/fsdev.o 00:03:09.640 LINK hello_blob 00:03:09.640 LINK accel_perf 00:03:09.640 LINK vhost 00:03:09.640 LINK mkfs 00:03:09.640 CC test/nvme/reserve/reserve.o 00:03:09.640 LINK scheduler 00:03:09.640 CC test/nvme/startup/startup.o 00:03:09.640 CXX test/cpp_headers/fsdev_module.o 00:03:09.640 CC examples/nvme/hello_world/hello_world.o 00:03:09.640 CC test/lvol/esnap/esnap.o 00:03:09.900 CC examples/blob/cli/blobcli.o 00:03:09.900 CXX test/cpp_headers/ftl.o 00:03:09.900 LINK startup 00:03:09.900 LINK reserve 00:03:09.900 CC app/spdk_dd/spdk_dd.o 00:03:09.900 CC examples/nvme/reconnect/reconnect.o 00:03:09.900 LINK hello_world 00:03:09.900 CC app/fio/nvme/fio_plugin.o 00:03:09.900 CC examples/bdev/hello_world/hello_bdev.o 00:03:09.900 CXX test/cpp_headers/fuse_dispatcher.o 00:03:10.160 CC test/nvme/simple_copy/simple_copy.o 00:03:10.160 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:10.160 CXX test/cpp_headers/gpt_spec.o 00:03:10.160 LINK blobcli 00:03:10.160 CC examples/bdev/bdevperf/bdevperf.o 00:03:10.160 LINK hello_bdev 00:03:10.160 LINK reconnect 00:03:10.160 LINK simple_copy 00:03:10.160 CXX test/cpp_headers/hexlify.o 00:03:10.160 LINK spdk_dd 00:03:10.419 CXX test/cpp_headers/histogram_data.o 00:03:10.419 CC examples/nvme/arbitration/arbitration.o 00:03:10.419 LINK spdk_nvme 00:03:10.419 CC app/fio/bdev/fio_plugin.o 00:03:10.419 CC examples/nvme/hotplug/hotplug.o 00:03:10.419 CC test/nvme/connect_stress/connect_stress.o 00:03:10.419 CXX test/cpp_headers/idxd.o 00:03:10.419 LINK nvme_manage 00:03:10.419 CC test/nvme/boot_partition/boot_partition.o 00:03:10.679 CXX test/cpp_headers/idxd_spec.o 00:03:10.679 LINK arbitration 00:03:10.679 LINK connect_stress 00:03:10.679 CC test/bdev/bdevio/bdevio.o 00:03:10.679 LINK hotplug 00:03:10.679 LINK boot_partition 00:03:10.679 LINK bdevperf 00:03:10.679 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:10.679 CXX test/cpp_headers/init.o 00:03:10.679 CC test/nvme/fused_ordering/fused_ordering.o 00:03:10.679 CC test/nvme/compliance/nvme_compliance.o 00:03:10.939 LINK spdk_bdev 00:03:10.939 CXX test/cpp_headers/ioat.o 00:03:10.939 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:10.939 CC test/nvme/fdp/fdp.o 00:03:10.939 LINK cmb_copy 00:03:10.939 CC test/nvme/cuse/cuse.o 00:03:10.939 LINK fused_ordering 00:03:10.939 LINK bdevio 00:03:10.939 CC examples/nvme/abort/abort.o 00:03:10.939 CXX test/cpp_headers/ioat_spec.o 00:03:10.939 LINK nvme_compliance 00:03:10.939 LINK doorbell_aers 00:03:11.199 CXX test/cpp_headers/iscsi_spec.o 00:03:11.199 CXX test/cpp_headers/json.o 00:03:11.199 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:11.199 CXX test/cpp_headers/jsonrpc.o 00:03:11.199 LINK fdp 00:03:11.199 CXX test/cpp_headers/keyring.o 00:03:11.199 CXX test/cpp_headers/keyring_module.o 00:03:11.199 CXX test/cpp_headers/likely.o 00:03:11.199 LINK abort 00:03:11.199 LINK pmr_persistence 00:03:11.199 CXX test/cpp_headers/log.o 00:03:11.199 CXX test/cpp_headers/lvol.o 00:03:11.199 CXX test/cpp_headers/md5.o 00:03:11.199 CXX test/cpp_headers/memory.o 00:03:11.199 CXX test/cpp_headers/mmio.o 00:03:11.199 CXX test/cpp_headers/net.o 00:03:11.199 CXX test/cpp_headers/nbd.o 00:03:11.462 CXX 
test/cpp_headers/notify.o 00:03:11.462 CXX test/cpp_headers/nvme.o 00:03:11.462 CXX test/cpp_headers/nvme_intel.o 00:03:11.462 CXX test/cpp_headers/nvme_ocssd.o 00:03:11.462 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:11.462 CXX test/cpp_headers/nvme_spec.o 00:03:11.462 CXX test/cpp_headers/nvme_zns.o 00:03:11.462 CXX test/cpp_headers/nvmf_cmd.o 00:03:11.462 CC examples/nvmf/nvmf/nvmf.o 00:03:11.462 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:11.462 CXX test/cpp_headers/nvmf.o 00:03:11.462 CXX test/cpp_headers/nvmf_spec.o 00:03:11.462 CXX test/cpp_headers/nvmf_transport.o 00:03:11.462 CXX test/cpp_headers/opal.o 00:03:11.730 CXX test/cpp_headers/opal_spec.o 00:03:11.730 CXX test/cpp_headers/pci_ids.o 00:03:11.730 CXX test/cpp_headers/pipe.o 00:03:11.730 CXX test/cpp_headers/queue.o 00:03:11.730 CXX test/cpp_headers/reduce.o 00:03:11.730 LINK nvmf 00:03:11.730 CXX test/cpp_headers/rpc.o 00:03:11.730 CXX test/cpp_headers/scheduler.o 00:03:11.730 CXX test/cpp_headers/scsi.o 00:03:11.730 CXX test/cpp_headers/scsi_spec.o 00:03:11.730 CXX test/cpp_headers/sock.o 00:03:11.990 CXX test/cpp_headers/stdinc.o 00:03:11.990 CXX test/cpp_headers/string.o 00:03:11.990 CXX test/cpp_headers/thread.o 00:03:11.990 CXX test/cpp_headers/trace.o 00:03:11.990 CXX test/cpp_headers/trace_parser.o 00:03:11.990 CXX test/cpp_headers/tree.o 00:03:11.990 LINK cuse 00:03:11.990 CXX test/cpp_headers/ublk.o 00:03:11.990 CXX test/cpp_headers/util.o 00:03:11.990 CXX test/cpp_headers/uuid.o 00:03:11.990 CXX test/cpp_headers/version.o 00:03:11.990 CXX test/cpp_headers/vfio_user_pci.o 00:03:11.990 CXX test/cpp_headers/vfio_user_spec.o 00:03:11.990 CXX test/cpp_headers/vhost.o 00:03:11.990 CXX test/cpp_headers/vmd.o 00:03:11.990 CXX test/cpp_headers/xor.o 00:03:12.251 CXX test/cpp_headers/zipf.o 00:03:13.635 LINK esnap 00:03:13.893 00:03:13.893 real 1m21.095s 00:03:13.893 user 7m7.239s 00:03:13.893 sys 1m24.386s 00:03:13.893 20:26:28 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:13.893 ************************************ 00:03:13.893 END TEST make 00:03:13.893 20:26:28 make -- common/autotest_common.sh@10 -- $ set +x 00:03:13.893 ************************************ 00:03:13.893 20:26:28 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:13.893 20:26:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:13.893 20:26:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:13.893 20:26:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.893 20:26:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:13.893 20:26:28 -- pm/common@44 -- $ pid=5024 00:03:13.893 20:26:28 -- pm/common@50 -- $ kill -TERM 5024 00:03:13.893 20:26:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.893 20:26:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:13.893 20:26:28 -- pm/common@44 -- $ pid=5026 00:03:13.893 20:26:28 -- pm/common@50 -- $ kill -TERM 5026 00:03:13.893 20:26:28 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:13.893 20:26:28 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:13.893 20:26:28 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:13.893 20:26:28 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:13.893 20:26:28 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:14.151 20:26:28 -- 
common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:14.151 20:26:28 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:14.151 20:26:28 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:14.151 20:26:28 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:14.151 20:26:28 -- scripts/common.sh@336 -- # IFS=.-: 00:03:14.151 20:26:28 -- scripts/common.sh@336 -- # read -ra ver1 00:03:14.151 20:26:28 -- scripts/common.sh@337 -- # IFS=.-: 00:03:14.151 20:26:28 -- scripts/common.sh@337 -- # read -ra ver2 00:03:14.151 20:26:28 -- scripts/common.sh@338 -- # local 'op=<' 00:03:14.151 20:26:28 -- scripts/common.sh@340 -- # ver1_l=2 00:03:14.151 20:26:28 -- scripts/common.sh@341 -- # ver2_l=1 00:03:14.151 20:26:28 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:14.151 20:26:28 -- scripts/common.sh@344 -- # case "$op" in 00:03:14.151 20:26:28 -- scripts/common.sh@345 -- # : 1 00:03:14.151 20:26:28 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:14.151 20:26:28 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:14.151 20:26:28 -- scripts/common.sh@365 -- # decimal 1 00:03:14.151 20:26:28 -- scripts/common.sh@353 -- # local d=1 00:03:14.151 20:26:28 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:14.151 20:26:28 -- scripts/common.sh@355 -- # echo 1 00:03:14.151 20:26:28 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:14.151 20:26:28 -- scripts/common.sh@366 -- # decimal 2 00:03:14.151 20:26:28 -- scripts/common.sh@353 -- # local d=2 00:03:14.151 20:26:28 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:14.151 20:26:28 -- scripts/common.sh@355 -- # echo 2 00:03:14.151 20:26:28 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:14.151 20:26:28 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:14.151 20:26:28 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:14.151 20:26:28 -- scripts/common.sh@368 -- # return 0 00:03:14.151 20:26:28 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:14.151 20:26:28 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:14.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:14.151 --rc genhtml_branch_coverage=1 00:03:14.151 --rc genhtml_function_coverage=1 00:03:14.151 --rc genhtml_legend=1 00:03:14.151 --rc geninfo_all_blocks=1 00:03:14.151 --rc geninfo_unexecuted_blocks=1 00:03:14.151 00:03:14.151 ' 00:03:14.151 20:26:28 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:14.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:14.152 --rc genhtml_branch_coverage=1 00:03:14.152 --rc genhtml_function_coverage=1 00:03:14.152 --rc genhtml_legend=1 00:03:14.152 --rc geninfo_all_blocks=1 00:03:14.152 --rc geninfo_unexecuted_blocks=1 00:03:14.152 00:03:14.152 ' 00:03:14.152 20:26:28 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:14.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:14.152 --rc genhtml_branch_coverage=1 00:03:14.152 --rc genhtml_function_coverage=1 00:03:14.152 --rc genhtml_legend=1 00:03:14.152 --rc geninfo_all_blocks=1 00:03:14.152 --rc geninfo_unexecuted_blocks=1 00:03:14.152 00:03:14.152 ' 00:03:14.152 20:26:28 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:14.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:14.152 --rc genhtml_branch_coverage=1 00:03:14.152 --rc genhtml_function_coverage=1 00:03:14.152 --rc genhtml_legend=1 00:03:14.152 --rc geninfo_all_blocks=1 00:03:14.152 --rc geninfo_unexecuted_blocks=1 
00:03:14.152 00:03:14.152 ' 00:03:14.152 20:26:28 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:14.152 20:26:28 -- nvmf/common.sh@7 -- # uname -s 00:03:14.152 20:26:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:14.152 20:26:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:14.152 20:26:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:14.152 20:26:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:14.152 20:26:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:14.152 20:26:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:14.152 20:26:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:14.152 20:26:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:14.152 20:26:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:14.152 20:26:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:14.152 20:26:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:03:14.152 20:26:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:03:14.152 20:26:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:14.152 20:26:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:14.152 20:26:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:14.152 20:26:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:14.152 20:26:28 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:14.152 20:26:28 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:14.152 20:26:28 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:14.152 20:26:28 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:14.152 20:26:28 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:14.152 20:26:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:14.152 20:26:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:14.152 20:26:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:14.152 20:26:28 -- paths/export.sh@5 -- # export PATH 00:03:14.152 20:26:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:14.152 20:26:28 -- nvmf/common.sh@51 -- # : 0 00:03:14.152 20:26:28 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:14.152 20:26:28 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:14.152 20:26:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:14.152 20:26:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:14.152 20:26:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:14.152 20:26:28 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:14.152 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:14.152 20:26:28 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:14.152 20:26:28 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:14.152 20:26:28 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:14.152 20:26:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:14.152 20:26:28 -- spdk/autotest.sh@32 -- # uname -s 00:03:14.152 20:26:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:14.152 20:26:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:14.152 20:26:28 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:14.152 20:26:28 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:14.152 20:26:28 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:14.152 20:26:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:14.152 20:26:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:14.152 20:26:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:14.152 20:26:28 -- spdk/autotest.sh@48 -- # udevadm_pid=53993 00:03:14.152 20:26:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:14.152 20:26:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:14.152 20:26:28 -- pm/common@17 -- # local monitor 00:03:14.152 20:26:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:14.152 20:26:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:14.152 20:26:28 -- pm/common@25 -- # sleep 1 00:03:14.152 20:26:28 -- pm/common@21 -- # date +%s 00:03:14.152 20:26:28 -- pm/common@21 -- # date +%s 00:03:14.152 20:26:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732652788 00:03:14.152 20:26:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732652788 00:03:14.152 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732652788_collect-cpu-load.pm.log 00:03:14.152 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732652788_collect-vmstat.pm.log 00:03:15.083 20:26:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:15.083 20:26:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:15.083 20:26:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:15.083 20:26:29 -- common/autotest_common.sh@10 -- # set +x 00:03:15.083 20:26:29 -- spdk/autotest.sh@59 -- # create_test_list 00:03:15.083 20:26:29 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:15.083 20:26:29 -- common/autotest_common.sh@10 -- # set +x 00:03:15.083 20:26:29 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:15.083 20:26:29 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:15.083 20:26:29 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:15.083 20:26:29 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:15.083 20:26:29 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 
00:03:15.083 20:26:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:15.083 20:26:29 -- common/autotest_common.sh@1457 -- # uname 00:03:15.083 20:26:29 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:15.083 20:26:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:15.083 20:26:29 -- common/autotest_common.sh@1477 -- # uname 00:03:15.083 20:26:29 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:15.083 20:26:29 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:15.083 20:26:29 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:15.083 lcov: LCOV version 1.15 00:03:15.083 20:26:29 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:30.147 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:30.147 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:45.050 20:26:57 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:45.050 20:26:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:45.050 20:26:57 -- common/autotest_common.sh@10 -- # set +x 00:03:45.050 20:26:57 -- spdk/autotest.sh@78 -- # rm -f 00:03:45.050 20:26:57 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:45.050 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:45.050 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:45.050 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:45.050 20:26:57 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:45.050 20:26:57 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:45.050 20:26:57 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:45.050 20:26:57 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:45.050 20:26:57 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:45.050 20:26:57 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:45.050 20:26:57 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:45.050 20:26:57 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:45.050 20:26:57 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:45.050 20:26:57 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:45.050 20:26:57 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:03:45.050 20:26:57 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:45.050 20:26:57 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:45.050 20:26:57 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:45.050 20:26:57 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:45.050 20:26:57 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:03:45.050 20:26:57 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:45.050 20:26:57 -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme1n2/queue/zoned ]] 00:03:45.050 20:26:57 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:45.050 20:26:57 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:45.050 20:26:57 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:03:45.050 20:26:57 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:45.050 20:26:57 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:45.050 20:26:57 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:45.050 20:26:57 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:45.050 20:26:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:45.050 20:26:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:45.050 20:26:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:45.050 20:26:57 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:45.050 20:26:57 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:45.050 No valid GPT data, bailing 00:03:45.050 20:26:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:45.050 20:26:57 -- scripts/common.sh@394 -- # pt= 00:03:45.050 20:26:57 -- scripts/common.sh@395 -- # return 1 00:03:45.050 20:26:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:45.050 1+0 records in 00:03:45.050 1+0 records out 00:03:45.050 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00415234 s, 253 MB/s 00:03:45.050 20:26:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:45.050 20:26:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:45.050 20:26:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:45.050 20:26:57 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:45.050 20:26:57 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:45.050 No valid GPT data, bailing 00:03:45.050 20:26:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:45.050 20:26:57 -- scripts/common.sh@394 -- # pt= 00:03:45.050 20:26:57 -- scripts/common.sh@395 -- # return 1 00:03:45.050 20:26:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:45.050 1+0 records in 00:03:45.050 1+0 records out 00:03:45.050 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450877 s, 233 MB/s 00:03:45.050 20:26:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:45.050 20:26:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:45.050 20:26:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:45.050 20:26:57 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:45.050 20:26:57 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:45.050 No valid GPT data, bailing 00:03:45.050 20:26:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:45.050 20:26:57 -- scripts/common.sh@394 -- # pt= 00:03:45.050 20:26:57 -- scripts/common.sh@395 -- # return 1 00:03:45.050 20:26:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:45.050 1+0 records in 00:03:45.050 1+0 records out 00:03:45.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00387089 s, 271 MB/s 00:03:45.051 20:26:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:45.051 20:26:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:45.051 20:26:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:45.051 20:26:57 -- scripts/common.sh@381 -- # 
local block=/dev/nvme1n3 pt 00:03:45.051 20:26:57 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:45.051 No valid GPT data, bailing 00:03:45.051 20:26:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:45.051 20:26:57 -- scripts/common.sh@394 -- # pt= 00:03:45.051 20:26:57 -- scripts/common.sh@395 -- # return 1 00:03:45.051 20:26:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:45.051 1+0 records in 00:03:45.051 1+0 records out 00:03:45.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00378714 s, 277 MB/s 00:03:45.051 20:26:57 -- spdk/autotest.sh@105 -- # sync 00:03:45.051 20:26:57 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:45.051 20:26:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:45.051 20:26:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:45.051 20:26:59 -- spdk/autotest.sh@111 -- # uname -s 00:03:45.051 20:26:59 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:45.051 20:26:59 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:45.051 20:26:59 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:45.622 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:45.622 Hugepages 00:03:45.622 node hugesize free / total 00:03:45.622 node0 1048576kB 0 / 0 00:03:45.622 node0 2048kB 0 / 0 00:03:45.622 00:03:45.622 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:45.622 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:45.622 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:45.622 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:45.622 20:27:00 -- spdk/autotest.sh@117 -- # uname -s 00:03:45.622 20:27:00 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:45.622 20:27:00 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:45.622 20:27:00 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:46.194 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:46.194 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:46.454 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:46.454 20:27:00 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:47.397 20:27:01 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:47.397 20:27:01 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:47.397 20:27:01 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:47.397 20:27:01 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:47.397 20:27:01 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:47.397 20:27:01 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:47.397 20:27:01 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:47.397 20:27:01 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:47.397 20:27:01 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:47.397 20:27:01 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:47.397 20:27:01 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:47.397 20:27:01 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:47.659 0000:00:03.0 (1af4 1001): Active 
devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.659 Waiting for block devices as requested 00:03:47.920 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:47.920 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:47.920 20:27:02 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:47.920 20:27:02 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:47.920 20:27:02 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:47.920 20:27:02 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:03:47.920 20:27:02 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:47.920 20:27:02 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:47.920 20:27:02 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:47.920 20:27:02 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:03:47.920 20:27:02 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:03:47.920 20:27:02 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:03:47.920 20:27:02 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:03:47.920 20:27:02 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:47.920 20:27:02 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:47.920 20:27:02 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:47.920 20:27:02 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:47.920 20:27:02 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:47.920 20:27:02 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:47.920 20:27:02 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:47.920 20:27:02 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:47.920 20:27:02 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:47.920 20:27:02 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:47.920 20:27:02 -- common/autotest_common.sh@1543 -- # continue 00:03:47.920 20:27:02 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:47.920 20:27:02 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:47.920 20:27:02 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:03:47.920 20:27:02 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:47.920 20:27:02 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:47.920 20:27:02 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:47.920 20:27:02 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:47.920 20:27:02 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:47.920 20:27:02 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:47.920 20:27:02 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:47.920 20:27:02 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:47.920 20:27:02 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:47.920 20:27:02 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:47.920 20:27:02 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:47.920 20:27:02 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:47.920 
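The per-controller loop above maps each PCI address back to its /dev/nvmeX node and then inspects the controller's OACS and unvmcap fields; condensed into a standalone sketch (the values in the comments are the ones from this run, and the 0x8 mask is an assumption about how oacs_ns_manage is derived, consistent with the logged 0x12a -> 8):

    bdf=0000:00:10.0
    ctrl_sysfs=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")   # .../0000:00:10.0/nvme/nvme1
    ctrl=/dev/$(basename "$ctrl_sysfs")                                        # /dev/nvme1
    oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)                     # 0x12a
    ns_manage=$(( oacs & 0x8 ))                     # bit 3 set (=8): namespace management is supported
    unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)               # 0: nothing left to reclaim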
20:27:02 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:47.920 20:27:02 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:47.920 20:27:02 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:47.920 20:27:02 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:47.920 20:27:02 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:47.920 20:27:02 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:47.920 20:27:02 -- common/autotest_common.sh@1543 -- # continue 00:03:47.920 20:27:02 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:47.920 20:27:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:47.920 20:27:02 -- common/autotest_common.sh@10 -- # set +x 00:03:47.920 20:27:02 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:47.920 20:27:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:47.920 20:27:02 -- common/autotest_common.sh@10 -- # set +x 00:03:47.920 20:27:02 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:48.552 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:48.552 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:48.552 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:48.815 20:27:03 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:48.815 20:27:03 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:48.815 20:27:03 -- common/autotest_common.sh@10 -- # set +x 00:03:48.815 20:27:03 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:48.815 20:27:03 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:48.815 20:27:03 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:48.815 20:27:03 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:48.815 20:27:03 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:48.815 20:27:03 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:48.815 20:27:03 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:48.815 20:27:03 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:48.815 20:27:03 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:48.815 20:27:03 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:48.815 20:27:03 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:48.815 20:27:03 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:48.815 20:27:03 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:48.815 20:27:03 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:48.815 20:27:03 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:48.815 20:27:03 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:48.815 20:27:03 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:48.815 20:27:03 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:48.815 20:27:03 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:48.815 20:27:03 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:48.815 20:27:03 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:48.815 20:27:03 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:48.815 20:27:03 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:48.815 20:27:03 -- common/autotest_common.sh@1572 -- # (( 0 
> 0 )) 00:03:48.815 20:27:03 -- common/autotest_common.sh@1572 -- # return 0 00:03:48.815 20:27:03 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:48.815 20:27:03 -- common/autotest_common.sh@1580 -- # return 0 00:03:48.815 20:27:03 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:48.815 20:27:03 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:48.815 20:27:03 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:48.815 20:27:03 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:48.815 20:27:03 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:48.815 20:27:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:48.815 20:27:03 -- common/autotest_common.sh@10 -- # set +x 00:03:48.815 20:27:03 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:03:48.815 20:27:03 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:03:48.815 20:27:03 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:03:48.815 20:27:03 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:48.815 20:27:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.815 20:27:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.815 20:27:03 -- common/autotest_common.sh@10 -- # set +x 00:03:48.815 ************************************ 00:03:48.815 START TEST env 00:03:48.815 ************************************ 00:03:48.815 20:27:03 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:48.815 * Looking for test storage... 00:03:48.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:48.815 20:27:03 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:48.815 20:27:03 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:48.815 20:27:03 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:48.815 20:27:03 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:48.815 20:27:03 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.077 20:27:03 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.077 20:27:03 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.077 20:27:03 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.077 20:27:03 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.077 20:27:03 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.077 20:27:03 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.077 20:27:03 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.077 20:27:03 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.077 20:27:03 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.078 20:27:03 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:49.078 20:27:03 env -- scripts/common.sh@344 -- # case "$op" in 00:03:49.078 20:27:03 env -- scripts/common.sh@345 -- # : 1 00:03:49.078 20:27:03 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.078 20:27:03 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:49.078 20:27:03 env -- scripts/common.sh@365 -- # decimal 1 00:03:49.078 20:27:03 env -- scripts/common.sh@353 -- # local d=1 00:03:49.078 20:27:03 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.078 20:27:03 env -- scripts/common.sh@355 -- # echo 1 00:03:49.078 20:27:03 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:49.078 20:27:03 env -- scripts/common.sh@366 -- # decimal 2 00:03:49.078 20:27:03 env -- scripts/common.sh@353 -- # local d=2 00:03:49.078 20:27:03 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.078 20:27:03 env -- scripts/common.sh@355 -- # echo 2 00:03:49.078 20:27:03 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:49.078 20:27:03 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:49.078 20:27:03 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:49.078 20:27:03 env -- scripts/common.sh@368 -- # return 0 00:03:49.078 20:27:03 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.078 20:27:03 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:49.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.078 --rc genhtml_branch_coverage=1 00:03:49.078 --rc genhtml_function_coverage=1 00:03:49.078 --rc genhtml_legend=1 00:03:49.078 --rc geninfo_all_blocks=1 00:03:49.078 --rc geninfo_unexecuted_blocks=1 00:03:49.078 00:03:49.078 ' 00:03:49.078 20:27:03 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:49.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.078 --rc genhtml_branch_coverage=1 00:03:49.078 --rc genhtml_function_coverage=1 00:03:49.078 --rc genhtml_legend=1 00:03:49.078 --rc geninfo_all_blocks=1 00:03:49.078 --rc geninfo_unexecuted_blocks=1 00:03:49.078 00:03:49.078 ' 00:03:49.078 20:27:03 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:49.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.078 --rc genhtml_branch_coverage=1 00:03:49.078 --rc genhtml_function_coverage=1 00:03:49.078 --rc genhtml_legend=1 00:03:49.078 --rc geninfo_all_blocks=1 00:03:49.078 --rc geninfo_unexecuted_blocks=1 00:03:49.078 00:03:49.078 ' 00:03:49.078 20:27:03 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:49.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.078 --rc genhtml_branch_coverage=1 00:03:49.078 --rc genhtml_function_coverage=1 00:03:49.078 --rc genhtml_legend=1 00:03:49.078 --rc geninfo_all_blocks=1 00:03:49.078 --rc geninfo_unexecuted_blocks=1 00:03:49.078 00:03:49.078 ' 00:03:49.078 20:27:03 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:49.078 20:27:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.078 20:27:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.078 20:27:03 env -- common/autotest_common.sh@10 -- # set +x 00:03:49.078 ************************************ 00:03:49.078 START TEST env_memory 00:03:49.078 ************************************ 00:03:49.078 20:27:03 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:49.078 00:03:49.078 00:03:49.078 CUnit - A unit testing framework for C - Version 2.1-3 00:03:49.078 http://cunit.sourceforge.net/ 00:03:49.078 00:03:49.078 00:03:49.078 Suite: memory 00:03:49.078 Test: alloc and free memory map ...[2024-11-26 20:27:03.419715] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:49.078 passed 00:03:49.078 Test: mem map translation ...[2024-11-26 20:27:03.443162] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:49.078 [2024-11-26 20:27:03.443204] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:49.078 [2024-11-26 20:27:03.443246] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:49.078 [2024-11-26 20:27:03.443253] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:49.078 passed 00:03:49.078 Test: mem map registration ...[2024-11-26 20:27:03.494888] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:49.078 [2024-11-26 20:27:03.494936] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:49.078 passed 00:03:49.078 Test: mem map adjacent registrations ...passed 00:03:49.078 00:03:49.078 Run Summary: Type Total Ran Passed Failed Inactive 00:03:49.078 suites 1 1 n/a 0 0 00:03:49.078 tests 4 4 4 0 0 00:03:49.078 asserts 152 152 152 0 n/a 00:03:49.078 00:03:49.078 Elapsed time = 0.168 seconds 00:03:49.078 00:03:49.078 real 0m0.178s 00:03:49.078 user 0m0.170s 00:03:49.078 sys 0m0.006s 00:03:49.078 20:27:03 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.078 20:27:03 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:49.078 ************************************ 00:03:49.078 END TEST env_memory 00:03:49.078 ************************************ 00:03:49.078 20:27:03 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:49.078 20:27:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.078 20:27:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.078 20:27:03 env -- common/autotest_common.sh@10 -- # set +x 00:03:49.078 ************************************ 00:03:49.078 START TEST env_vtophys 00:03:49.078 ************************************ 00:03:49.078 20:27:03 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:49.078 EAL: lib.eal log level changed from notice to debug 00:03:49.078 EAL: Detected lcore 0 as core 0 on socket 0 00:03:49.078 EAL: Detected lcore 1 as core 0 on socket 0 00:03:49.078 EAL: Detected lcore 2 as core 0 on socket 0 00:03:49.078 EAL: Detected lcore 3 as core 0 on socket 0 00:03:49.078 EAL: Detected lcore 4 as core 0 on socket 0 00:03:49.078 EAL: Detected lcore 5 as core 0 on socket 0 00:03:49.078 EAL: Detected lcore 6 as core 0 on socket 0 00:03:49.078 EAL: Detected lcore 7 as core 0 on socket 0 00:03:49.078 EAL: Detected lcore 8 as core 0 on socket 0 00:03:49.078 EAL: Detected lcore 9 as core 0 on socket 0 00:03:49.078 EAL: Maximum logical cores by configuration: 128 00:03:49.078 EAL: Detected CPU lcores: 10 00:03:49.078 EAL: Detected NUMA nodes: 1 00:03:49.078 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:49.078 EAL: Detected shared linkage of DPDK 00:03:49.078 EAL: No 
shared files mode enabled, IPC will be disabled 00:03:49.078 EAL: Selected IOVA mode 'PA' 00:03:49.078 EAL: Probing VFIO support... 00:03:49.078 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:49.340 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:49.340 EAL: Ask a virtual area of 0x2e000 bytes 00:03:49.340 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:49.340 EAL: Setting up physically contiguous memory... 00:03:49.340 EAL: Setting maximum number of open files to 524288 00:03:49.340 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:49.340 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:49.340 EAL: Ask a virtual area of 0x61000 bytes 00:03:49.340 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:49.340 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:49.340 EAL: Ask a virtual area of 0x400000000 bytes 00:03:49.340 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:49.340 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:49.340 EAL: Ask a virtual area of 0x61000 bytes 00:03:49.340 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:49.340 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:49.340 EAL: Ask a virtual area of 0x400000000 bytes 00:03:49.340 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:49.340 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:49.340 EAL: Ask a virtual area of 0x61000 bytes 00:03:49.340 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:49.340 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:49.340 EAL: Ask a virtual area of 0x400000000 bytes 00:03:49.340 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:49.340 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:49.340 EAL: Ask a virtual area of 0x61000 bytes 00:03:49.340 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:49.340 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:49.340 EAL: Ask a virtual area of 0x400000000 bytes 00:03:49.340 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:49.340 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:49.340 EAL: Hugepages will be freed exactly as allocated. 00:03:49.340 EAL: No shared files mode enabled, IPC is disabled 00:03:49.340 EAL: No shared files mode enabled, IPC is disabled 00:03:49.340 EAL: TSC frequency is ~2600000 KHz 00:03:49.340 EAL: Main lcore 0 is ready (tid=7f77fefe0a00;cpuset=[0]) 00:03:49.340 EAL: Trying to obtain current memory policy. 00:03:49.340 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.340 EAL: Restoring previous memory policy: 0 00:03:49.340 EAL: request: mp_malloc_sync 00:03:49.340 EAL: No shared files mode enabled, IPC is disabled 00:03:49.340 EAL: Heap on socket 0 was expanded by 2MB 00:03:49.340 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:49.340 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:49.340 EAL: Mem event callback 'spdk:(nil)' registered 00:03:49.340 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:03:49.340 00:03:49.340 00:03:49.340 CUnit - A unit testing framework for C - Version 2.1-3 00:03:49.340 http://cunit.sourceforge.net/ 00:03:49.340 00:03:49.340 00:03:49.340 Suite: components_suite 00:03:49.340 Test: vtophys_malloc_test ...passed 00:03:49.340 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:49.340 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.340 EAL: Restoring previous memory policy: 4 00:03:49.340 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.340 EAL: request: mp_malloc_sync 00:03:49.340 EAL: No shared files mode enabled, IPC is disabled 00:03:49.340 EAL: Heap on socket 0 was expanded by 4MB 00:03:49.340 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.340 EAL: request: mp_malloc_sync 00:03:49.340 EAL: No shared files mode enabled, IPC is disabled 00:03:49.340 EAL: Heap on socket 0 was shrunk by 4MB 00:03:49.340 EAL: Trying to obtain current memory policy. 00:03:49.340 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.340 EAL: Restoring previous memory policy: 4 00:03:49.340 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.340 EAL: request: mp_malloc_sync 00:03:49.340 EAL: No shared files mode enabled, IPC is disabled 00:03:49.340 EAL: Heap on socket 0 was expanded by 6MB 00:03:49.340 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.340 EAL: request: mp_malloc_sync 00:03:49.340 EAL: No shared files mode enabled, IPC is disabled 00:03:49.341 EAL: Heap on socket 0 was shrunk by 6MB 00:03:49.341 EAL: Trying to obtain current memory policy. 00:03:49.341 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.341 EAL: Restoring previous memory policy: 4 00:03:49.341 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.341 EAL: request: mp_malloc_sync 00:03:49.341 EAL: No shared files mode enabled, IPC is disabled 00:03:49.341 EAL: Heap on socket 0 was expanded by 10MB 00:03:49.341 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.341 EAL: request: mp_malloc_sync 00:03:49.341 EAL: No shared files mode enabled, IPC is disabled 00:03:49.341 EAL: Heap on socket 0 was shrunk by 10MB 00:03:49.341 EAL: Trying to obtain current memory policy. 00:03:49.341 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.341 EAL: Restoring previous memory policy: 4 00:03:49.341 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.341 EAL: request: mp_malloc_sync 00:03:49.341 EAL: No shared files mode enabled, IPC is disabled 00:03:49.341 EAL: Heap on socket 0 was expanded by 18MB 00:03:49.341 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.341 EAL: request: mp_malloc_sync 00:03:49.341 EAL: No shared files mode enabled, IPC is disabled 00:03:49.341 EAL: Heap on socket 0 was shrunk by 18MB 00:03:49.341 EAL: Trying to obtain current memory policy. 00:03:49.341 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.341 EAL: Restoring previous memory policy: 4 00:03:49.341 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.341 EAL: request: mp_malloc_sync 00:03:49.341 EAL: No shared files mode enabled, IPC is disabled 00:03:49.341 EAL: Heap on socket 0 was expanded by 34MB 00:03:49.341 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.341 EAL: request: mp_malloc_sync 00:03:49.341 EAL: No shared files mode enabled, IPC is disabled 00:03:49.341 EAL: Heap on socket 0 was shrunk by 34MB 00:03:49.341 EAL: Trying to obtain current memory policy. 
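The EAL setup above reserves the whole hugepage address space up front: four memseg lists of n_segs:8192 segments at hugepage_sz:2097152 bytes, i.e. 16 GiB of virtual area per list (the repeated 0x400000000 reservations) and 64 GiB in total. Quick arithmetic check against the logged numbers:

    printf '0x%x\n' $(( 8192 * 2097152 ))             # 0x400000000 -> size of one memseg list reservation
    echo "$(( 4 * 8192 * 2097152 / 1024**3 )) GiB"    # 64 GiB of address space reserved across the 4 lists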
00:03:49.341 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.341 EAL: Restoring previous memory policy: 4 00:03:49.341 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.341 EAL: request: mp_malloc_sync 00:03:49.341 EAL: No shared files mode enabled, IPC is disabled 00:03:49.341 EAL: Heap on socket 0 was expanded by 66MB 00:03:49.341 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.341 EAL: request: mp_malloc_sync 00:03:49.341 EAL: No shared files mode enabled, IPC is disabled 00:03:49.341 EAL: Heap on socket 0 was shrunk by 66MB 00:03:49.341 EAL: Trying to obtain current memory policy. 00:03:49.341 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.341 EAL: Restoring previous memory policy: 4 00:03:49.341 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.341 EAL: request: mp_malloc_sync 00:03:49.341 EAL: No shared files mode enabled, IPC is disabled 00:03:49.341 EAL: Heap on socket 0 was expanded by 130MB 00:03:49.341 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.341 EAL: request: mp_malloc_sync 00:03:49.341 EAL: No shared files mode enabled, IPC is disabled 00:03:49.341 EAL: Heap on socket 0 was shrunk by 130MB 00:03:49.341 EAL: Trying to obtain current memory policy. 00:03:49.341 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.341 EAL: Restoring previous memory policy: 4 00:03:49.341 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.341 EAL: request: mp_malloc_sync 00:03:49.341 EAL: No shared files mode enabled, IPC is disabled 00:03:49.341 EAL: Heap on socket 0 was expanded by 258MB 00:03:49.341 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.602 EAL: request: mp_malloc_sync 00:03:49.602 EAL: No shared files mode enabled, IPC is disabled 00:03:49.602 EAL: Heap on socket 0 was shrunk by 258MB 00:03:49.602 EAL: Trying to obtain current memory policy. 00:03:49.602 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.602 EAL: Restoring previous memory policy: 4 00:03:49.602 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.602 EAL: request: mp_malloc_sync 00:03:49.602 EAL: No shared files mode enabled, IPC is disabled 00:03:49.602 EAL: Heap on socket 0 was expanded by 514MB 00:03:49.602 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.602 EAL: request: mp_malloc_sync 00:03:49.602 EAL: No shared files mode enabled, IPC is disabled 00:03:49.602 EAL: Heap on socket 0 was shrunk by 514MB 00:03:49.602 EAL: Trying to obtain current memory policy. 
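The heap sizes reported by vtophys_malloc_test step through 4, 6, 10, 18, 34, 66, 130, 258, 514 MB here and 1026 MB just below, which looks consistent with power-of-two test allocations plus a constant 2 MB of allocator overhead (an interpretation of the logged numbers, not something the test prints explicitly):

    for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo   # 4MB 6MB 10MB ... 514MB 1026MB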
00:03:49.602 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.863 EAL: Restoring previous memory policy: 4 00:03:49.863 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.863 EAL: request: mp_malloc_sync 00:03:49.863 EAL: No shared files mode enabled, IPC is disabled 00:03:49.863 EAL: Heap on socket 0 was expanded by 1026MB 00:03:49.864 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.123 passed 00:03:50.123 00:03:50.123 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.123 suites 1 1 n/a 0 0 00:03:50.123 tests 2 2 2 0 0 00:03:50.123 asserts 5463 5463 5463 0 n/a 00:03:50.123 00:03:50.123 Elapsed time = 0.642 seconds 00:03:50.123 EAL: request: mp_malloc_sync 00:03:50.123 EAL: No shared files mode enabled, IPC is disabled 00:03:50.123 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:50.123 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.123 EAL: request: mp_malloc_sync 00:03:50.124 EAL: No shared files mode enabled, IPC is disabled 00:03:50.124 EAL: Heap on socket 0 was shrunk by 2MB 00:03:50.124 EAL: No shared files mode enabled, IPC is disabled 00:03:50.124 EAL: No shared files mode enabled, IPC is disabled 00:03:50.124 EAL: No shared files mode enabled, IPC is disabled 00:03:50.124 00:03:50.124 real 0m0.824s 00:03:50.124 user 0m0.399s 00:03:50.124 sys 0m0.302s 00:03:50.124 20:27:04 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.124 ************************************ 00:03:50.124 END TEST env_vtophys 00:03:50.124 20:27:04 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:50.124 ************************************ 00:03:50.124 20:27:04 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:50.124 20:27:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.124 20:27:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.124 20:27:04 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.124 ************************************ 00:03:50.124 START TEST env_pci 00:03:50.124 ************************************ 00:03:50.124 20:27:04 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:50.124 00:03:50.124 00:03:50.124 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.124 http://cunit.sourceforge.net/ 00:03:50.124 00:03:50.124 00:03:50.124 Suite: pci 00:03:50.124 Test: pci_hook ...[2024-11-26 20:27:04.476955] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56158 has claimed it 00:03:50.124 passed 00:03:50.124 00:03:50.124 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.124 suites 1 1 n/a 0 0 00:03:50.124 tests 1 1 1 0 0 00:03:50.124 asserts 25 25 25 0 n/a 00:03:50.124 00:03:50.124 Elapsed time = 0.002 seconds 00:03:50.124 EAL: Cannot find device (10000:00:01.0) 00:03:50.124 EAL: Failed to attach device on primary process 00:03:50.124 00:03:50.124 real 0m0.015s 00:03:50.124 user 0m0.006s 00:03:50.124 sys 0m0.009s 00:03:50.124 20:27:04 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.124 20:27:04 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:50.124 ************************************ 00:03:50.124 END TEST env_pci 00:03:50.124 ************************************ 00:03:50.124 20:27:04 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:50.124 20:27:04 env -- env/env.sh@15 -- # uname 00:03:50.124 20:27:04 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:50.124 20:27:04 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:50.124 20:27:04 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:50.124 20:27:04 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:50.124 20:27:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.124 20:27:04 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.124 ************************************ 00:03:50.124 START TEST env_dpdk_post_init 00:03:50.124 ************************************ 00:03:50.124 20:27:04 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:50.124 EAL: Detected CPU lcores: 10 00:03:50.124 EAL: Detected NUMA nodes: 1 00:03:50.124 EAL: Detected shared linkage of DPDK 00:03:50.124 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:50.124 EAL: Selected IOVA mode 'PA' 00:03:50.124 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:50.384 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:03:50.384 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:03:50.384 Starting DPDK initialization... 00:03:50.384 Starting SPDK post initialization... 00:03:50.384 SPDK NVMe probe 00:03:50.384 Attaching to 0000:00:10.0 00:03:50.384 Attaching to 0000:00:11.0 00:03:50.384 Attached to 0000:00:10.0 00:03:50.384 Attached to 0000:00:11.0 00:03:50.384 Cleaning up... 00:03:50.384 00:03:50.384 real 0m0.174s 00:03:50.384 user 0m0.042s 00:03:50.384 sys 0m0.032s 00:03:50.384 20:27:04 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.384 20:27:04 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:50.384 ************************************ 00:03:50.384 END TEST env_dpdk_post_init 00:03:50.384 ************************************ 00:03:50.384 20:27:04 env -- env/env.sh@26 -- # uname 00:03:50.384 20:27:04 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:50.384 20:27:04 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:50.384 20:27:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.384 20:27:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.384 20:27:04 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.384 ************************************ 00:03:50.384 START TEST env_mem_callbacks 00:03:50.384 ************************************ 00:03:50.384 20:27:04 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:50.384 EAL: Detected CPU lcores: 10 00:03:50.384 EAL: Detected NUMA nodes: 1 00:03:50.384 EAL: Detected shared linkage of DPDK 00:03:50.384 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:50.384 EAL: Selected IOVA mode 'PA' 00:03:50.384 00:03:50.384 00:03:50.384 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.384 http://cunit.sourceforge.net/ 00:03:50.384 00:03:50.384 00:03:50.384 Suite: memory 00:03:50.384 Test: test ... 
00:03:50.384 register 0x200000200000 2097152 00:03:50.384 malloc 3145728 00:03:50.384 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:50.384 register 0x200000400000 4194304 00:03:50.384 buf 0x200000500000 len 3145728 PASSED 00:03:50.384 malloc 64 00:03:50.384 buf 0x2000004fff40 len 64 PASSED 00:03:50.384 malloc 4194304 00:03:50.384 register 0x200000800000 6291456 00:03:50.384 buf 0x200000a00000 len 4194304 PASSED 00:03:50.384 free 0x200000500000 3145728 00:03:50.384 free 0x2000004fff40 64 00:03:50.384 unregister 0x200000400000 4194304 PASSED 00:03:50.384 free 0x200000a00000 4194304 00:03:50.384 unregister 0x200000800000 6291456 PASSED 00:03:50.384 malloc 8388608 00:03:50.384 register 0x200000400000 10485760 00:03:50.384 buf 0x200000600000 len 8388608 PASSED 00:03:50.384 free 0x200000600000 8388608 00:03:50.384 unregister 0x200000400000 10485760 PASSED 00:03:50.384 passed 00:03:50.384 00:03:50.384 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.384 suites 1 1 n/a 0 0 00:03:50.384 tests 1 1 1 0 0 00:03:50.384 asserts 15 15 15 0 n/a 00:03:50.384 00:03:50.384 Elapsed time = 0.006 seconds 00:03:50.384 00:03:50.384 real 0m0.132s 00:03:50.384 user 0m0.009s 00:03:50.384 sys 0m0.022s 00:03:50.384 20:27:04 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.384 ************************************ 00:03:50.384 20:27:04 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:50.384 END TEST env_mem_callbacks 00:03:50.384 ************************************ 00:03:50.384 00:03:50.384 real 0m1.679s 00:03:50.384 user 0m0.785s 00:03:50.384 sys 0m0.558s 00:03:50.384 20:27:04 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.384 ************************************ 00:03:50.384 END TEST env 00:03:50.384 ************************************ 00:03:50.384 20:27:04 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.645 20:27:04 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:50.645 20:27:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.645 20:27:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.645 20:27:04 -- common/autotest_common.sh@10 -- # set +x 00:03:50.645 ************************************ 00:03:50.645 START TEST rpc 00:03:50.645 ************************************ 00:03:50.645 20:27:04 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:50.645 * Looking for test storage... 
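Every sub-test in this log is driven through the same run_test helper, which prints the banner pair, times the command, and produces the real/user/sys lines seen after each test. A minimal approximation of that wrapper (the real helper in autotest_common.sh also manages xtrace and argument checks such as the '[' 2 -le 1 ']' lines above, so this is a simplification):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                      # run the test command itself
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }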
00:03:50.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:50.645 20:27:05 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:50.645 20:27:05 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:50.645 20:27:05 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:50.645 20:27:05 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:50.645 20:27:05 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:50.645 20:27:05 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:50.645 20:27:05 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:50.645 20:27:05 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.645 20:27:05 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:50.645 20:27:05 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:50.645 20:27:05 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:50.645 20:27:05 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:50.645 20:27:05 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:50.645 20:27:05 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:50.645 20:27:05 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:50.645 20:27:05 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:50.645 20:27:05 rpc -- scripts/common.sh@345 -- # : 1 00:03:50.645 20:27:05 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:50.645 20:27:05 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:50.645 20:27:05 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:50.645 20:27:05 rpc -- scripts/common.sh@353 -- # local d=1 00:03:50.646 20:27:05 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.646 20:27:05 rpc -- scripts/common.sh@355 -- # echo 1 00:03:50.646 20:27:05 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:50.646 20:27:05 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:50.646 20:27:05 rpc -- scripts/common.sh@353 -- # local d=2 00:03:50.646 20:27:05 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.646 20:27:05 rpc -- scripts/common.sh@355 -- # echo 2 00:03:50.646 20:27:05 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:50.646 20:27:05 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:50.646 20:27:05 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:50.646 20:27:05 rpc -- scripts/common.sh@368 -- # return 0 00:03:50.646 20:27:05 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.646 20:27:05 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:50.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.646 --rc genhtml_branch_coverage=1 00:03:50.646 --rc genhtml_function_coverage=1 00:03:50.646 --rc genhtml_legend=1 00:03:50.646 --rc geninfo_all_blocks=1 00:03:50.646 --rc geninfo_unexecuted_blocks=1 00:03:50.646 00:03:50.646 ' 00:03:50.646 20:27:05 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:50.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.646 --rc genhtml_branch_coverage=1 00:03:50.646 --rc genhtml_function_coverage=1 00:03:50.646 --rc genhtml_legend=1 00:03:50.646 --rc geninfo_all_blocks=1 00:03:50.646 --rc geninfo_unexecuted_blocks=1 00:03:50.646 00:03:50.646 ' 00:03:50.646 20:27:05 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:50.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.646 --rc genhtml_branch_coverage=1 00:03:50.646 --rc genhtml_function_coverage=1 00:03:50.646 --rc 
genhtml_legend=1 00:03:50.646 --rc geninfo_all_blocks=1 00:03:50.646 --rc geninfo_unexecuted_blocks=1 00:03:50.646 00:03:50.646 ' 00:03:50.646 20:27:05 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:50.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.646 --rc genhtml_branch_coverage=1 00:03:50.646 --rc genhtml_function_coverage=1 00:03:50.646 --rc genhtml_legend=1 00:03:50.646 --rc geninfo_all_blocks=1 00:03:50.646 --rc geninfo_unexecuted_blocks=1 00:03:50.646 00:03:50.646 ' 00:03:50.646 20:27:05 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56270 00:03:50.646 20:27:05 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:50.646 20:27:05 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56270 00:03:50.646 20:27:05 rpc -- common/autotest_common.sh@835 -- # '[' -z 56270 ']' 00:03:50.646 20:27:05 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:50.646 20:27:05 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:50.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:50.646 20:27:05 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:50.646 20:27:05 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:50.646 20:27:05 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:50.646 20:27:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.646 [2024-11-26 20:27:05.125702] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:03:50.646 [2024-11-26 20:27:05.125767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56270 ] 00:03:50.907 [2024-11-26 20:27:05.266383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:50.907 [2024-11-26 20:27:05.303202] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:50.907 [2024-11-26 20:27:05.303250] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56270' to capture a snapshot of events at runtime. 00:03:50.907 [2024-11-26 20:27:05.303258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:50.907 [2024-11-26 20:27:05.303264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:50.907 [2024-11-26 20:27:05.303270] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56270 for offline analysis/debug. 
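The notices above give the exact commands for pulling a trace out of this run; a minimal sketch, assuming the spdk_trace tool from the same build is on PATH and the target (pid 56270) is still running:

    # take a live snapshot of the 'bdev' tracepoint group from the running spdk_tgt
    spdk_trace -s spdk_tgt -p 56270
    # or keep the shared-memory trace file for offline analysis once the run ends
    # (the destination directory is an arbitrary choice for this example)
    cp /dev/shm/spdk_tgt_trace.pid56270 /tmp/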
00:03:50.907 [2024-11-26 20:27:05.303534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:50.907 [2024-11-26 20:27:05.351362] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:03:51.169 20:27:05 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:51.169 20:27:05 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:51.169 20:27:05 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:51.169 20:27:05 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:51.169 20:27:05 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:51.169 20:27:05 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:51.169 20:27:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.169 20:27:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.169 20:27:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.169 ************************************ 00:03:51.169 START TEST rpc_integrity 00:03:51.169 ************************************ 00:03:51.169 20:27:05 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:51.169 20:27:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:51.169 20:27:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.169 20:27:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.169 20:27:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.169 20:27:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:51.169 20:27:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:51.169 20:27:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:51.169 20:27:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:51.169 20:27:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.169 20:27:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.169 20:27:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.169 20:27:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:51.169 20:27:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:51.169 20:27:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.169 20:27:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.169 20:27:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.169 20:27:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:51.169 { 00:03:51.169 "name": "Malloc0", 00:03:51.169 "aliases": [ 00:03:51.169 "b91c3688-8a2e-41e2-a5d3-503f5248ab43" 00:03:51.169 ], 00:03:51.169 "product_name": "Malloc disk", 00:03:51.169 "block_size": 512, 00:03:51.169 "num_blocks": 16384, 00:03:51.169 "uuid": "b91c3688-8a2e-41e2-a5d3-503f5248ab43", 00:03:51.169 "assigned_rate_limits": { 00:03:51.169 "rw_ios_per_sec": 0, 00:03:51.169 "rw_mbytes_per_sec": 0, 00:03:51.169 "r_mbytes_per_sec": 0, 00:03:51.169 "w_mbytes_per_sec": 0 00:03:51.169 }, 00:03:51.169 "claimed": false, 00:03:51.169 "zoned": false, 00:03:51.169 
"supported_io_types": { 00:03:51.169 "read": true, 00:03:51.169 "write": true, 00:03:51.169 "unmap": true, 00:03:51.169 "flush": true, 00:03:51.169 "reset": true, 00:03:51.169 "nvme_admin": false, 00:03:51.169 "nvme_io": false, 00:03:51.169 "nvme_io_md": false, 00:03:51.169 "write_zeroes": true, 00:03:51.169 "zcopy": true, 00:03:51.169 "get_zone_info": false, 00:03:51.169 "zone_management": false, 00:03:51.169 "zone_append": false, 00:03:51.169 "compare": false, 00:03:51.169 "compare_and_write": false, 00:03:51.169 "abort": true, 00:03:51.169 "seek_hole": false, 00:03:51.169 "seek_data": false, 00:03:51.169 "copy": true, 00:03:51.169 "nvme_iov_md": false 00:03:51.169 }, 00:03:51.169 "memory_domains": [ 00:03:51.169 { 00:03:51.169 "dma_device_id": "system", 00:03:51.169 "dma_device_type": 1 00:03:51.169 }, 00:03:51.169 { 00:03:51.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.169 "dma_device_type": 2 00:03:51.169 } 00:03:51.169 ], 00:03:51.169 "driver_specific": {} 00:03:51.169 } 00:03:51.169 ]' 00:03:51.169 20:27:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:51.169 20:27:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:51.169 20:27:05 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:51.169 20:27:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.169 20:27:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.169 [2024-11-26 20:27:05.617920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:51.169 [2024-11-26 20:27:05.617964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:51.169 [2024-11-26 20:27:05.617977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa6b050 00:03:51.169 [2024-11-26 20:27:05.617983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:51.169 [2024-11-26 20:27:05.619380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:51.169 [2024-11-26 20:27:05.619410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:51.169 Passthru0 00:03:51.169 20:27:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.169 20:27:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:51.169 20:27:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.169 20:27:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.169 20:27:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.169 20:27:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:51.169 { 00:03:51.169 "name": "Malloc0", 00:03:51.169 "aliases": [ 00:03:51.169 "b91c3688-8a2e-41e2-a5d3-503f5248ab43" 00:03:51.169 ], 00:03:51.169 "product_name": "Malloc disk", 00:03:51.169 "block_size": 512, 00:03:51.169 "num_blocks": 16384, 00:03:51.169 "uuid": "b91c3688-8a2e-41e2-a5d3-503f5248ab43", 00:03:51.169 "assigned_rate_limits": { 00:03:51.169 "rw_ios_per_sec": 0, 00:03:51.169 "rw_mbytes_per_sec": 0, 00:03:51.169 "r_mbytes_per_sec": 0, 00:03:51.169 "w_mbytes_per_sec": 0 00:03:51.169 }, 00:03:51.169 "claimed": true, 00:03:51.169 "claim_type": "exclusive_write", 00:03:51.169 "zoned": false, 00:03:51.169 "supported_io_types": { 00:03:51.169 "read": true, 00:03:51.169 "write": true, 00:03:51.169 "unmap": true, 00:03:51.169 "flush": true, 00:03:51.169 "reset": true, 00:03:51.169 "nvme_admin": false, 
00:03:51.169 "nvme_io": false, 00:03:51.169 "nvme_io_md": false, 00:03:51.169 "write_zeroes": true, 00:03:51.169 "zcopy": true, 00:03:51.169 "get_zone_info": false, 00:03:51.169 "zone_management": false, 00:03:51.169 "zone_append": false, 00:03:51.169 "compare": false, 00:03:51.169 "compare_and_write": false, 00:03:51.169 "abort": true, 00:03:51.169 "seek_hole": false, 00:03:51.169 "seek_data": false, 00:03:51.169 "copy": true, 00:03:51.169 "nvme_iov_md": false 00:03:51.169 }, 00:03:51.169 "memory_domains": [ 00:03:51.169 { 00:03:51.169 "dma_device_id": "system", 00:03:51.169 "dma_device_type": 1 00:03:51.169 }, 00:03:51.169 { 00:03:51.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.169 "dma_device_type": 2 00:03:51.169 } 00:03:51.169 ], 00:03:51.169 "driver_specific": {} 00:03:51.169 }, 00:03:51.169 { 00:03:51.169 "name": "Passthru0", 00:03:51.169 "aliases": [ 00:03:51.169 "965ede55-965e-5b52-a556-4a67f1909374" 00:03:51.169 ], 00:03:51.169 "product_name": "passthru", 00:03:51.169 "block_size": 512, 00:03:51.169 "num_blocks": 16384, 00:03:51.169 "uuid": "965ede55-965e-5b52-a556-4a67f1909374", 00:03:51.169 "assigned_rate_limits": { 00:03:51.169 "rw_ios_per_sec": 0, 00:03:51.169 "rw_mbytes_per_sec": 0, 00:03:51.169 "r_mbytes_per_sec": 0, 00:03:51.169 "w_mbytes_per_sec": 0 00:03:51.169 }, 00:03:51.169 "claimed": false, 00:03:51.169 "zoned": false, 00:03:51.169 "supported_io_types": { 00:03:51.169 "read": true, 00:03:51.169 "write": true, 00:03:51.169 "unmap": true, 00:03:51.169 "flush": true, 00:03:51.169 "reset": true, 00:03:51.169 "nvme_admin": false, 00:03:51.169 "nvme_io": false, 00:03:51.169 "nvme_io_md": false, 00:03:51.169 "write_zeroes": true, 00:03:51.169 "zcopy": true, 00:03:51.169 "get_zone_info": false, 00:03:51.169 "zone_management": false, 00:03:51.169 "zone_append": false, 00:03:51.169 "compare": false, 00:03:51.169 "compare_and_write": false, 00:03:51.169 "abort": true, 00:03:51.169 "seek_hole": false, 00:03:51.169 "seek_data": false, 00:03:51.169 "copy": true, 00:03:51.169 "nvme_iov_md": false 00:03:51.169 }, 00:03:51.169 "memory_domains": [ 00:03:51.169 { 00:03:51.169 "dma_device_id": "system", 00:03:51.169 "dma_device_type": 1 00:03:51.169 }, 00:03:51.169 { 00:03:51.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.169 "dma_device_type": 2 00:03:51.169 } 00:03:51.169 ], 00:03:51.169 "driver_specific": { 00:03:51.169 "passthru": { 00:03:51.169 "name": "Passthru0", 00:03:51.169 "base_bdev_name": "Malloc0" 00:03:51.169 } 00:03:51.169 } 00:03:51.169 } 00:03:51.169 ]' 00:03:51.169 20:27:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:51.169 20:27:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:51.169 20:27:05 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:51.169 20:27:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.169 20:27:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.169 20:27:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.170 20:27:05 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:51.170 20:27:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.170 20:27:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.170 20:27:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.170 20:27:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:51.170 20:27:05 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.170 20:27:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.170 20:27:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.170 20:27:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:51.170 20:27:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:51.431 20:27:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:51.431 00:03:51.431 real 0m0.237s 00:03:51.431 user 0m0.134s 00:03:51.431 sys 0m0.036s 00:03:51.431 20:27:05 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.431 20:27:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.431 ************************************ 00:03:51.431 END TEST rpc_integrity 00:03:51.432 ************************************ 00:03:51.432 20:27:05 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:51.432 20:27:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.432 20:27:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.432 20:27:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.432 ************************************ 00:03:51.432 START TEST rpc_plugins 00:03:51.432 ************************************ 00:03:51.432 20:27:05 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:51.432 20:27:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:51.432 20:27:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.432 20:27:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.432 20:27:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.432 20:27:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:51.432 20:27:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:51.432 20:27:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.432 20:27:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.432 20:27:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.432 20:27:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:51.432 { 00:03:51.432 "name": "Malloc1", 00:03:51.432 "aliases": [ 00:03:51.432 "3aeb8889-f896-4481-abd0-d8a18fd86732" 00:03:51.432 ], 00:03:51.432 "product_name": "Malloc disk", 00:03:51.432 "block_size": 4096, 00:03:51.432 "num_blocks": 256, 00:03:51.432 "uuid": "3aeb8889-f896-4481-abd0-d8a18fd86732", 00:03:51.432 "assigned_rate_limits": { 00:03:51.432 "rw_ios_per_sec": 0, 00:03:51.432 "rw_mbytes_per_sec": 0, 00:03:51.432 "r_mbytes_per_sec": 0, 00:03:51.432 "w_mbytes_per_sec": 0 00:03:51.432 }, 00:03:51.432 "claimed": false, 00:03:51.432 "zoned": false, 00:03:51.432 "supported_io_types": { 00:03:51.432 "read": true, 00:03:51.432 "write": true, 00:03:51.432 "unmap": true, 00:03:51.432 "flush": true, 00:03:51.432 "reset": true, 00:03:51.432 "nvme_admin": false, 00:03:51.432 "nvme_io": false, 00:03:51.432 "nvme_io_md": false, 00:03:51.432 "write_zeroes": true, 00:03:51.432 "zcopy": true, 00:03:51.432 "get_zone_info": false, 00:03:51.432 "zone_management": false, 00:03:51.432 "zone_append": false, 00:03:51.432 "compare": false, 00:03:51.432 "compare_and_write": false, 00:03:51.432 "abort": true, 00:03:51.432 "seek_hole": false, 00:03:51.432 "seek_data": false, 00:03:51.432 "copy": true, 00:03:51.432 "nvme_iov_md": false 00:03:51.432 }, 00:03:51.432 "memory_domains": [ 00:03:51.432 { 
00:03:51.432 "dma_device_id": "system", 00:03:51.432 "dma_device_type": 1 00:03:51.432 }, 00:03:51.432 { 00:03:51.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.432 "dma_device_type": 2 00:03:51.432 } 00:03:51.432 ], 00:03:51.432 "driver_specific": {} 00:03:51.432 } 00:03:51.432 ]' 00:03:51.432 20:27:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:51.432 20:27:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:51.432 20:27:05 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:51.432 20:27:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.432 20:27:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.432 20:27:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.432 20:27:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:51.432 20:27:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.432 20:27:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.432 20:27:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.432 20:27:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:51.432 20:27:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:51.432 20:27:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:51.432 00:03:51.432 real 0m0.114s 00:03:51.432 user 0m0.060s 00:03:51.432 sys 0m0.018s 00:03:51.432 20:27:05 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.432 ************************************ 00:03:51.432 END TEST rpc_plugins 00:03:51.432 ************************************ 00:03:51.432 20:27:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.432 20:27:05 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:51.432 20:27:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.432 20:27:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.432 20:27:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.432 ************************************ 00:03:51.432 START TEST rpc_trace_cmd_test 00:03:51.432 ************************************ 00:03:51.432 20:27:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:51.432 20:27:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:51.432 20:27:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:51.432 20:27:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.432 20:27:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:51.693 20:27:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.693 20:27:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:51.693 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56270", 00:03:51.693 "tpoint_group_mask": "0x8", 00:03:51.693 "iscsi_conn": { 00:03:51.693 "mask": "0x2", 00:03:51.693 "tpoint_mask": "0x0" 00:03:51.693 }, 00:03:51.693 "scsi": { 00:03:51.693 "mask": "0x4", 00:03:51.693 "tpoint_mask": "0x0" 00:03:51.693 }, 00:03:51.693 "bdev": { 00:03:51.693 "mask": "0x8", 00:03:51.693 "tpoint_mask": "0xffffffffffffffff" 00:03:51.693 }, 00:03:51.693 "nvmf_rdma": { 00:03:51.693 "mask": "0x10", 00:03:51.693 "tpoint_mask": "0x0" 00:03:51.693 }, 00:03:51.693 "nvmf_tcp": { 00:03:51.693 "mask": "0x20", 00:03:51.693 "tpoint_mask": "0x0" 00:03:51.693 }, 00:03:51.693 "ftl": { 00:03:51.693 
"mask": "0x40", 00:03:51.693 "tpoint_mask": "0x0" 00:03:51.693 }, 00:03:51.693 "blobfs": { 00:03:51.693 "mask": "0x80", 00:03:51.693 "tpoint_mask": "0x0" 00:03:51.693 }, 00:03:51.693 "dsa": { 00:03:51.693 "mask": "0x200", 00:03:51.693 "tpoint_mask": "0x0" 00:03:51.693 }, 00:03:51.693 "thread": { 00:03:51.693 "mask": "0x400", 00:03:51.693 "tpoint_mask": "0x0" 00:03:51.693 }, 00:03:51.693 "nvme_pcie": { 00:03:51.693 "mask": "0x800", 00:03:51.693 "tpoint_mask": "0x0" 00:03:51.693 }, 00:03:51.693 "iaa": { 00:03:51.693 "mask": "0x1000", 00:03:51.693 "tpoint_mask": "0x0" 00:03:51.693 }, 00:03:51.693 "nvme_tcp": { 00:03:51.693 "mask": "0x2000", 00:03:51.693 "tpoint_mask": "0x0" 00:03:51.693 }, 00:03:51.693 "bdev_nvme": { 00:03:51.693 "mask": "0x4000", 00:03:51.693 "tpoint_mask": "0x0" 00:03:51.693 }, 00:03:51.693 "sock": { 00:03:51.693 "mask": "0x8000", 00:03:51.693 "tpoint_mask": "0x0" 00:03:51.693 }, 00:03:51.693 "blob": { 00:03:51.693 "mask": "0x10000", 00:03:51.693 "tpoint_mask": "0x0" 00:03:51.693 }, 00:03:51.693 "bdev_raid": { 00:03:51.693 "mask": "0x20000", 00:03:51.693 "tpoint_mask": "0x0" 00:03:51.693 }, 00:03:51.693 "scheduler": { 00:03:51.693 "mask": "0x40000", 00:03:51.693 "tpoint_mask": "0x0" 00:03:51.693 } 00:03:51.693 }' 00:03:51.693 20:27:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:51.693 20:27:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:51.693 20:27:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:51.693 20:27:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:51.693 20:27:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:51.693 20:27:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:51.693 20:27:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:51.693 20:27:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:51.693 20:27:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:51.693 20:27:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:51.693 00:03:51.693 real 0m0.172s 00:03:51.693 user 0m0.134s 00:03:51.693 sys 0m0.030s 00:03:51.693 20:27:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.693 20:27:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:51.693 ************************************ 00:03:51.693 END TEST rpc_trace_cmd_test 00:03:51.693 ************************************ 00:03:51.693 20:27:06 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:51.693 20:27:06 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:51.693 20:27:06 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:51.693 20:27:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.693 20:27:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.693 20:27:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.693 ************************************ 00:03:51.693 START TEST rpc_daemon_integrity 00:03:51.693 ************************************ 00:03:51.693 20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:51.693 20:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:51.693 20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.693 20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.693 
20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.693 20:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:51.693 20:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:51.693 20:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:51.955 20:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:51.955 20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.955 20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.955 20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.955 20:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:51.955 20:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:51.955 20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.955 20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.955 20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.955 20:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:51.955 { 00:03:51.955 "name": "Malloc2", 00:03:51.955 "aliases": [ 00:03:51.955 "485178ce-b613-440b-a4ec-134de4b50734" 00:03:51.955 ], 00:03:51.955 "product_name": "Malloc disk", 00:03:51.955 "block_size": 512, 00:03:51.955 "num_blocks": 16384, 00:03:51.955 "uuid": "485178ce-b613-440b-a4ec-134de4b50734", 00:03:51.955 "assigned_rate_limits": { 00:03:51.955 "rw_ios_per_sec": 0, 00:03:51.955 "rw_mbytes_per_sec": 0, 00:03:51.955 "r_mbytes_per_sec": 0, 00:03:51.955 "w_mbytes_per_sec": 0 00:03:51.955 }, 00:03:51.955 "claimed": false, 00:03:51.955 "zoned": false, 00:03:51.955 "supported_io_types": { 00:03:51.955 "read": true, 00:03:51.955 "write": true, 00:03:51.955 "unmap": true, 00:03:51.955 "flush": true, 00:03:51.955 "reset": true, 00:03:51.955 "nvme_admin": false, 00:03:51.955 "nvme_io": false, 00:03:51.955 "nvme_io_md": false, 00:03:51.955 "write_zeroes": true, 00:03:51.955 "zcopy": true, 00:03:51.955 "get_zone_info": false, 00:03:51.955 "zone_management": false, 00:03:51.955 "zone_append": false, 00:03:51.955 "compare": false, 00:03:51.955 "compare_and_write": false, 00:03:51.955 "abort": true, 00:03:51.955 "seek_hole": false, 00:03:51.955 "seek_data": false, 00:03:51.955 "copy": true, 00:03:51.955 "nvme_iov_md": false 00:03:51.955 }, 00:03:51.955 "memory_domains": [ 00:03:51.955 { 00:03:51.955 "dma_device_id": "system", 00:03:51.955 "dma_device_type": 1 00:03:51.955 }, 00:03:51.955 { 00:03:51.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.955 "dma_device_type": 2 00:03:51.955 } 00:03:51.955 ], 00:03:51.955 "driver_specific": {} 00:03:51.955 } 00:03:51.955 ]' 00:03:51.955 20:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:51.955 20:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:51.955 20:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:51.955 20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.955 20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.955 [2024-11-26 20:27:06.310201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:51.955 [2024-11-26 20:27:06.310242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:03:51.955 [2024-11-26 20:27:06.310255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa76030 00:03:51.955 [2024-11-26 20:27:06.310261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:51.955 [2024-11-26 20:27:06.311610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:51.955 [2024-11-26 20:27:06.311636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:51.955 Passthru0 00:03:51.955 20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.955 20:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:51.955 20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.955 20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.955 20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.955 20:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:51.955 { 00:03:51.955 "name": "Malloc2", 00:03:51.955 "aliases": [ 00:03:51.955 "485178ce-b613-440b-a4ec-134de4b50734" 00:03:51.955 ], 00:03:51.955 "product_name": "Malloc disk", 00:03:51.955 "block_size": 512, 00:03:51.955 "num_blocks": 16384, 00:03:51.955 "uuid": "485178ce-b613-440b-a4ec-134de4b50734", 00:03:51.955 "assigned_rate_limits": { 00:03:51.955 "rw_ios_per_sec": 0, 00:03:51.955 "rw_mbytes_per_sec": 0, 00:03:51.955 "r_mbytes_per_sec": 0, 00:03:51.955 "w_mbytes_per_sec": 0 00:03:51.955 }, 00:03:51.955 "claimed": true, 00:03:51.955 "claim_type": "exclusive_write", 00:03:51.955 "zoned": false, 00:03:51.955 "supported_io_types": { 00:03:51.955 "read": true, 00:03:51.955 "write": true, 00:03:51.955 "unmap": true, 00:03:51.955 "flush": true, 00:03:51.955 "reset": true, 00:03:51.955 "nvme_admin": false, 00:03:51.955 "nvme_io": false, 00:03:51.955 "nvme_io_md": false, 00:03:51.955 "write_zeroes": true, 00:03:51.955 "zcopy": true, 00:03:51.955 "get_zone_info": false, 00:03:51.955 "zone_management": false, 00:03:51.955 "zone_append": false, 00:03:51.955 "compare": false, 00:03:51.955 "compare_and_write": false, 00:03:51.955 "abort": true, 00:03:51.955 "seek_hole": false, 00:03:51.955 "seek_data": false, 00:03:51.955 "copy": true, 00:03:51.956 "nvme_iov_md": false 00:03:51.956 }, 00:03:51.956 "memory_domains": [ 00:03:51.956 { 00:03:51.956 "dma_device_id": "system", 00:03:51.956 "dma_device_type": 1 00:03:51.956 }, 00:03:51.956 { 00:03:51.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.956 "dma_device_type": 2 00:03:51.956 } 00:03:51.956 ], 00:03:51.956 "driver_specific": {} 00:03:51.956 }, 00:03:51.956 { 00:03:51.956 "name": "Passthru0", 00:03:51.956 "aliases": [ 00:03:51.956 "f1d7ff38-4c7a-5717-bac3-bd526da4f89a" 00:03:51.956 ], 00:03:51.956 "product_name": "passthru", 00:03:51.956 "block_size": 512, 00:03:51.956 "num_blocks": 16384, 00:03:51.956 "uuid": "f1d7ff38-4c7a-5717-bac3-bd526da4f89a", 00:03:51.956 "assigned_rate_limits": { 00:03:51.956 "rw_ios_per_sec": 0, 00:03:51.956 "rw_mbytes_per_sec": 0, 00:03:51.956 "r_mbytes_per_sec": 0, 00:03:51.956 "w_mbytes_per_sec": 0 00:03:51.956 }, 00:03:51.956 "claimed": false, 00:03:51.956 "zoned": false, 00:03:51.956 "supported_io_types": { 00:03:51.956 "read": true, 00:03:51.956 "write": true, 00:03:51.956 "unmap": true, 00:03:51.956 "flush": true, 00:03:51.956 "reset": true, 00:03:51.956 "nvme_admin": false, 00:03:51.956 "nvme_io": false, 00:03:51.956 "nvme_io_md": 
false, 00:03:51.956 "write_zeroes": true, 00:03:51.956 "zcopy": true, 00:03:51.956 "get_zone_info": false, 00:03:51.956 "zone_management": false, 00:03:51.956 "zone_append": false, 00:03:51.956 "compare": false, 00:03:51.956 "compare_and_write": false, 00:03:51.956 "abort": true, 00:03:51.956 "seek_hole": false, 00:03:51.956 "seek_data": false, 00:03:51.956 "copy": true, 00:03:51.956 "nvme_iov_md": false 00:03:51.956 }, 00:03:51.956 "memory_domains": [ 00:03:51.956 { 00:03:51.956 "dma_device_id": "system", 00:03:51.956 "dma_device_type": 1 00:03:51.956 }, 00:03:51.956 { 00:03:51.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.956 "dma_device_type": 2 00:03:51.956 } 00:03:51.956 ], 00:03:51.956 "driver_specific": { 00:03:51.956 "passthru": { 00:03:51.956 "name": "Passthru0", 00:03:51.956 "base_bdev_name": "Malloc2" 00:03:51.956 } 00:03:51.956 } 00:03:51.956 } 00:03:51.956 ]' 00:03:51.956 20:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:51.956 20:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:51.956 20:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:51.956 20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.956 20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.956 20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.956 20:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:51.956 20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.956 20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.956 20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.956 20:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:51.956 20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.956 20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.956 20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.956 20:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:51.956 20:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:51.956 20:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:51.956 00:03:51.956 real 0m0.221s 00:03:51.956 user 0m0.123s 00:03:51.956 sys 0m0.032s 00:03:51.956 20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.956 20:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.956 ************************************ 00:03:51.956 END TEST rpc_daemon_integrity 00:03:51.956 ************************************ 00:03:51.956 20:27:06 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:51.956 20:27:06 rpc -- rpc/rpc.sh@84 -- # killprocess 56270 00:03:51.956 20:27:06 rpc -- common/autotest_common.sh@954 -- # '[' -z 56270 ']' 00:03:51.956 20:27:06 rpc -- common/autotest_common.sh@958 -- # kill -0 56270 00:03:51.956 20:27:06 rpc -- common/autotest_common.sh@959 -- # uname 00:03:51.956 20:27:06 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:51.956 20:27:06 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56270 00:03:51.956 20:27:06 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:51.956 
killing process with pid 56270 00:03:51.956 20:27:06 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:51.956 20:27:06 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56270' 00:03:51.956 20:27:06 rpc -- common/autotest_common.sh@973 -- # kill 56270 00:03:51.956 20:27:06 rpc -- common/autotest_common.sh@978 -- # wait 56270 00:03:52.216 00:03:52.216 real 0m1.766s 00:03:52.216 user 0m2.187s 00:03:52.216 sys 0m0.494s 00:03:52.216 20:27:06 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.216 20:27:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.216 ************************************ 00:03:52.216 END TEST rpc 00:03:52.216 ************************************ 00:03:52.216 20:27:06 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:52.217 20:27:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.217 20:27:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.217 20:27:06 -- common/autotest_common.sh@10 -- # set +x 00:03:52.217 ************************************ 00:03:52.217 START TEST skip_rpc 00:03:52.217 ************************************ 00:03:52.217 20:27:06 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:52.477 * Looking for test storage... 00:03:52.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:52.477 20:27:06 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:52.477 20:27:06 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:52.477 20:27:06 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:52.477 20:27:06 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:52.477 20:27:06 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:52.477 20:27:06 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:52.477 20:27:06 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:52.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.477 --rc genhtml_branch_coverage=1 00:03:52.477 --rc genhtml_function_coverage=1 00:03:52.477 --rc genhtml_legend=1 00:03:52.477 --rc geninfo_all_blocks=1 00:03:52.477 --rc geninfo_unexecuted_blocks=1 00:03:52.477 00:03:52.477 ' 00:03:52.477 20:27:06 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:52.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.477 --rc genhtml_branch_coverage=1 00:03:52.477 --rc genhtml_function_coverage=1 00:03:52.477 --rc genhtml_legend=1 00:03:52.477 --rc geninfo_all_blocks=1 00:03:52.477 --rc geninfo_unexecuted_blocks=1 00:03:52.477 00:03:52.477 ' 00:03:52.477 20:27:06 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:52.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.477 --rc genhtml_branch_coverage=1 00:03:52.477 --rc genhtml_function_coverage=1 00:03:52.477 --rc genhtml_legend=1 00:03:52.477 --rc geninfo_all_blocks=1 00:03:52.477 --rc geninfo_unexecuted_blocks=1 00:03:52.477 00:03:52.477 ' 00:03:52.477 20:27:06 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:52.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.477 --rc genhtml_branch_coverage=1 00:03:52.478 --rc genhtml_function_coverage=1 00:03:52.478 --rc genhtml_legend=1 00:03:52.478 --rc geninfo_all_blocks=1 00:03:52.478 --rc geninfo_unexecuted_blocks=1 00:03:52.478 00:03:52.478 ' 00:03:52.478 20:27:06 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:52.478 20:27:06 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:52.478 20:27:06 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:52.478 20:27:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.478 20:27:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.478 20:27:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.478 ************************************ 00:03:52.478 START TEST skip_rpc 00:03:52.478 ************************************ 00:03:52.478 20:27:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:52.478 20:27:06 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56463 00:03:52.478 20:27:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:52.478 20:27:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:52.478 20:27:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:52.478 [2024-11-26 20:27:06.965933] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:03:52.478 [2024-11-26 20:27:06.966001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56463 ] 00:03:52.739 [2024-11-26 20:27:07.106556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.739 [2024-11-26 20:27:07.141990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:52.739 [2024-11-26 20:27:07.185095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56463 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56463 ']' 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56463 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56463 00:03:58.092 killing process with pid 56463 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 56463' 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56463 00:03:58.092 20:27:11 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56463 00:03:58.092 00:03:58.092 real 0m5.353s 00:03:58.092 user 0m5.062s 00:03:58.092 sys 0m0.187s 00:03:58.092 20:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.092 ************************************ 00:03:58.092 END TEST skip_rpc 00:03:58.092 ************************************ 00:03:58.092 20:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.092 20:27:12 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:58.092 20:27:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.092 20:27:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.092 20:27:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.092 ************************************ 00:03:58.092 START TEST skip_rpc_with_json 00:03:58.092 ************************************ 00:03:58.092 20:27:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:58.092 20:27:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:58.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:58.092 20:27:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56545 00:03:58.093 20:27:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:58.093 20:27:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56545 00:03:58.093 20:27:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 56545 ']' 00:03:58.093 20:27:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:58.093 20:27:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:58.093 20:27:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:58.093 20:27:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:03:58.093 20:27:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:58.093 20:27:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:58.093 [2024-11-26 20:27:12.393505] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
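The skip_rpc case above starts the target with --no-rpc-server and then checks that an RPC call fails; a minimal sketch of the same check done by hand, assuming the standard scripts/rpc.py client shipped in this repo (the test itself goes through its rpc_cmd wrapper instead):

    # start the target with the JSON-RPC server disabled
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    # with no server listening on the default socket, any RPC should error out
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version || echo 'RPC unavailable, as expected'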
00:03:58.093 [2024-11-26 20:27:12.393585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56545 ] 00:03:58.093 [2024-11-26 20:27:12.534635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.093 [2024-11-26 20:27:12.591426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.380 [2024-11-26 20:27:12.669917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:03:58.979 20:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:58.980 20:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:58.980 20:27:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:58.980 20:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.980 20:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:58.980 [2024-11-26 20:27:13.297978] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:58.980 request: 00:03:58.980 { 00:03:58.980 "trtype": "tcp", 00:03:58.980 "method": "nvmf_get_transports", 00:03:58.980 "req_id": 1 00:03:58.980 } 00:03:58.980 Got JSON-RPC error response 00:03:58.980 response: 00:03:58.980 { 00:03:58.980 "code": -19, 00:03:58.980 "message": "No such device" 00:03:58.980 } 00:03:58.980 20:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:58.980 20:27:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:58.980 20:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.980 20:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:58.980 [2024-11-26 20:27:13.310081] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:58.980 20:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.980 20:27:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:58.980 20:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.980 20:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:58.980 20:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.980 20:27:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:58.980 { 00:03:58.980 "subsystems": [ 00:03:58.980 { 00:03:58.980 "subsystem": "fsdev", 00:03:58.980 "config": [ 00:03:58.980 { 00:03:58.980 "method": "fsdev_set_opts", 00:03:58.980 "params": { 00:03:58.980 "fsdev_io_pool_size": 65535, 00:03:58.980 "fsdev_io_cache_size": 256 00:03:58.980 } 00:03:58.980 } 00:03:58.980 ] 00:03:58.980 }, 00:03:58.980 { 00:03:58.980 "subsystem": "keyring", 00:03:58.980 "config": [] 00:03:58.980 }, 00:03:58.980 { 00:03:58.980 "subsystem": "iobuf", 00:03:58.980 "config": [ 00:03:58.980 { 00:03:58.980 "method": "iobuf_set_options", 00:03:58.980 "params": { 00:03:58.980 "small_pool_count": 8192, 00:03:58.980 "large_pool_count": 1024, 00:03:58.980 "small_bufsize": 8192, 00:03:58.980 "large_bufsize": 135168, 00:03:58.980 "enable_numa": false 00:03:58.980 } 
00:03:58.980 } 00:03:58.980 ] 00:03:58.980 }, 00:03:58.980 { 00:03:58.980 "subsystem": "sock", 00:03:58.980 "config": [ 00:03:58.980 { 00:03:58.980 "method": "sock_set_default_impl", 00:03:58.980 "params": { 00:03:58.980 "impl_name": "uring" 00:03:58.980 } 00:03:58.980 }, 00:03:58.980 { 00:03:58.980 "method": "sock_impl_set_options", 00:03:58.980 "params": { 00:03:58.980 "impl_name": "ssl", 00:03:58.980 "recv_buf_size": 4096, 00:03:58.980 "send_buf_size": 4096, 00:03:58.980 "enable_recv_pipe": true, 00:03:58.980 "enable_quickack": false, 00:03:58.980 "enable_placement_id": 0, 00:03:58.980 "enable_zerocopy_send_server": true, 00:03:58.980 "enable_zerocopy_send_client": false, 00:03:58.980 "zerocopy_threshold": 0, 00:03:58.980 "tls_version": 0, 00:03:58.980 "enable_ktls": false 00:03:58.980 } 00:03:58.980 }, 00:03:58.980 { 00:03:58.980 "method": "sock_impl_set_options", 00:03:58.980 "params": { 00:03:58.980 "impl_name": "posix", 00:03:58.980 "recv_buf_size": 2097152, 00:03:58.980 "send_buf_size": 2097152, 00:03:58.980 "enable_recv_pipe": true, 00:03:58.980 "enable_quickack": false, 00:03:58.980 "enable_placement_id": 0, 00:03:58.980 "enable_zerocopy_send_server": true, 00:03:58.980 "enable_zerocopy_send_client": false, 00:03:58.980 "zerocopy_threshold": 0, 00:03:58.980 "tls_version": 0, 00:03:58.980 "enable_ktls": false 00:03:58.980 } 00:03:58.980 }, 00:03:58.980 { 00:03:58.980 "method": "sock_impl_set_options", 00:03:58.980 "params": { 00:03:58.980 "impl_name": "uring", 00:03:58.980 "recv_buf_size": 2097152, 00:03:58.980 "send_buf_size": 2097152, 00:03:58.980 "enable_recv_pipe": true, 00:03:58.980 "enable_quickack": false, 00:03:58.980 "enable_placement_id": 0, 00:03:58.980 "enable_zerocopy_send_server": false, 00:03:58.980 "enable_zerocopy_send_client": false, 00:03:58.980 "zerocopy_threshold": 0, 00:03:58.980 "tls_version": 0, 00:03:58.980 "enable_ktls": false 00:03:58.980 } 00:03:58.980 } 00:03:58.980 ] 00:03:58.980 }, 00:03:58.980 { 00:03:58.980 "subsystem": "vmd", 00:03:58.980 "config": [] 00:03:58.980 }, 00:03:58.980 { 00:03:58.980 "subsystem": "accel", 00:03:58.980 "config": [ 00:03:58.980 { 00:03:58.980 "method": "accel_set_options", 00:03:58.980 "params": { 00:03:58.980 "small_cache_size": 128, 00:03:58.980 "large_cache_size": 16, 00:03:58.980 "task_count": 2048, 00:03:58.980 "sequence_count": 2048, 00:03:58.980 "buf_count": 2048 00:03:58.980 } 00:03:58.980 } 00:03:58.980 ] 00:03:58.980 }, 00:03:58.980 { 00:03:58.980 "subsystem": "bdev", 00:03:58.980 "config": [ 00:03:58.980 { 00:03:58.980 "method": "bdev_set_options", 00:03:58.980 "params": { 00:03:58.980 "bdev_io_pool_size": 65535, 00:03:58.980 "bdev_io_cache_size": 256, 00:03:58.980 "bdev_auto_examine": true, 00:03:58.980 "iobuf_small_cache_size": 128, 00:03:58.980 "iobuf_large_cache_size": 16 00:03:58.980 } 00:03:58.980 }, 00:03:58.980 { 00:03:58.980 "method": "bdev_raid_set_options", 00:03:58.980 "params": { 00:03:58.980 "process_window_size_kb": 1024, 00:03:58.980 "process_max_bandwidth_mb_sec": 0 00:03:58.980 } 00:03:58.980 }, 00:03:58.980 { 00:03:58.980 "method": "bdev_iscsi_set_options", 00:03:58.980 "params": { 00:03:58.980 "timeout_sec": 30 00:03:58.980 } 00:03:58.980 }, 00:03:58.980 { 00:03:58.980 "method": "bdev_nvme_set_options", 00:03:58.980 "params": { 00:03:58.980 "action_on_timeout": "none", 00:03:58.980 "timeout_us": 0, 00:03:58.980 "timeout_admin_us": 0, 00:03:58.980 "keep_alive_timeout_ms": 10000, 00:03:58.980 "arbitration_burst": 0, 00:03:58.980 "low_priority_weight": 0, 00:03:58.980 "medium_priority_weight": 
0, 00:03:58.980 "high_priority_weight": 0, 00:03:58.980 "nvme_adminq_poll_period_us": 10000, 00:03:58.980 "nvme_ioq_poll_period_us": 0, 00:03:58.980 "io_queue_requests": 0, 00:03:58.980 "delay_cmd_submit": true, 00:03:58.980 "transport_retry_count": 4, 00:03:58.980 "bdev_retry_count": 3, 00:03:58.980 "transport_ack_timeout": 0, 00:03:58.980 "ctrlr_loss_timeout_sec": 0, 00:03:58.980 "reconnect_delay_sec": 0, 00:03:58.980 "fast_io_fail_timeout_sec": 0, 00:03:58.980 "disable_auto_failback": false, 00:03:58.980 "generate_uuids": false, 00:03:58.980 "transport_tos": 0, 00:03:58.980 "nvme_error_stat": false, 00:03:58.980 "rdma_srq_size": 0, 00:03:58.980 "io_path_stat": false, 00:03:58.980 "allow_accel_sequence": false, 00:03:58.980 "rdma_max_cq_size": 0, 00:03:58.980 "rdma_cm_event_timeout_ms": 0, 00:03:58.980 "dhchap_digests": [ 00:03:58.980 "sha256", 00:03:58.980 "sha384", 00:03:58.980 "sha512" 00:03:58.980 ], 00:03:58.980 "dhchap_dhgroups": [ 00:03:58.980 "null", 00:03:58.980 "ffdhe2048", 00:03:58.980 "ffdhe3072", 00:03:58.980 "ffdhe4096", 00:03:58.980 "ffdhe6144", 00:03:58.980 "ffdhe8192" 00:03:58.980 ] 00:03:58.980 } 00:03:58.980 }, 00:03:58.980 { 00:03:58.980 "method": "bdev_nvme_set_hotplug", 00:03:58.980 "params": { 00:03:58.980 "period_us": 100000, 00:03:58.980 "enable": false 00:03:58.980 } 00:03:58.980 }, 00:03:58.980 { 00:03:58.980 "method": "bdev_wait_for_examine" 00:03:58.980 } 00:03:58.980 ] 00:03:58.980 }, 00:03:58.980 { 00:03:58.980 "subsystem": "scsi", 00:03:58.980 "config": null 00:03:58.980 }, 00:03:58.980 { 00:03:58.980 "subsystem": "scheduler", 00:03:58.980 "config": [ 00:03:58.980 { 00:03:58.980 "method": "framework_set_scheduler", 00:03:58.980 "params": { 00:03:58.980 "name": "static" 00:03:58.980 } 00:03:58.980 } 00:03:58.980 ] 00:03:58.980 }, 00:03:58.980 { 00:03:58.980 "subsystem": "vhost_scsi", 00:03:58.980 "config": [] 00:03:58.980 }, 00:03:58.980 { 00:03:58.980 "subsystem": "vhost_blk", 00:03:58.980 "config": [] 00:03:58.980 }, 00:03:58.980 { 00:03:58.980 "subsystem": "ublk", 00:03:58.980 "config": [] 00:03:58.980 }, 00:03:58.980 { 00:03:58.980 "subsystem": "nbd", 00:03:58.980 "config": [] 00:03:58.980 }, 00:03:58.980 { 00:03:58.980 "subsystem": "nvmf", 00:03:58.980 "config": [ 00:03:58.980 { 00:03:58.980 "method": "nvmf_set_config", 00:03:58.980 "params": { 00:03:58.980 "discovery_filter": "match_any", 00:03:58.980 "admin_cmd_passthru": { 00:03:58.980 "identify_ctrlr": false 00:03:58.980 }, 00:03:58.980 "dhchap_digests": [ 00:03:58.980 "sha256", 00:03:58.980 "sha384", 00:03:58.980 "sha512" 00:03:58.980 ], 00:03:58.980 "dhchap_dhgroups": [ 00:03:58.980 "null", 00:03:58.980 "ffdhe2048", 00:03:58.980 "ffdhe3072", 00:03:58.981 "ffdhe4096", 00:03:58.981 "ffdhe6144", 00:03:58.981 "ffdhe8192" 00:03:58.981 ] 00:03:58.981 } 00:03:58.981 }, 00:03:58.981 { 00:03:58.981 "method": "nvmf_set_max_subsystems", 00:03:58.981 "params": { 00:03:58.981 "max_subsystems": 1024 00:03:58.981 } 00:03:58.981 }, 00:03:58.981 { 00:03:58.981 "method": "nvmf_set_crdt", 00:03:58.981 "params": { 00:03:58.981 "crdt1": 0, 00:03:58.981 "crdt2": 0, 00:03:58.981 "crdt3": 0 00:03:58.981 } 00:03:58.981 }, 00:03:58.981 { 00:03:58.981 "method": "nvmf_create_transport", 00:03:58.981 "params": { 00:03:58.981 "trtype": "TCP", 00:03:58.981 "max_queue_depth": 128, 00:03:58.981 "max_io_qpairs_per_ctrlr": 127, 00:03:58.981 "in_capsule_data_size": 4096, 00:03:58.981 "max_io_size": 131072, 00:03:58.981 "io_unit_size": 131072, 00:03:58.981 "max_aq_depth": 128, 00:03:58.981 "num_shared_buffers": 511, 00:03:58.981 
"buf_cache_size": 4294967295, 00:03:58.981 "dif_insert_or_strip": false, 00:03:58.981 "zcopy": false, 00:03:58.981 "c2h_success": true, 00:03:58.981 "sock_priority": 0, 00:03:58.981 "abort_timeout_sec": 1, 00:03:58.981 "ack_timeout": 0, 00:03:58.981 "data_wr_pool_size": 0 00:03:58.981 } 00:03:58.981 } 00:03:58.981 ] 00:03:58.981 }, 00:03:58.981 { 00:03:58.981 "subsystem": "iscsi", 00:03:58.981 "config": [ 00:03:58.981 { 00:03:58.981 "method": "iscsi_set_options", 00:03:58.981 "params": { 00:03:58.981 "node_base": "iqn.2016-06.io.spdk", 00:03:58.981 "max_sessions": 128, 00:03:58.981 "max_connections_per_session": 2, 00:03:58.981 "max_queue_depth": 64, 00:03:58.981 "default_time2wait": 2, 00:03:58.981 "default_time2retain": 20, 00:03:58.981 "first_burst_length": 8192, 00:03:58.981 "immediate_data": true, 00:03:58.981 "allow_duplicated_isid": false, 00:03:58.981 "error_recovery_level": 0, 00:03:58.981 "nop_timeout": 60, 00:03:58.981 "nop_in_interval": 30, 00:03:58.981 "disable_chap": false, 00:03:58.981 "require_chap": false, 00:03:58.981 "mutual_chap": false, 00:03:58.981 "chap_group": 0, 00:03:58.981 "max_large_datain_per_connection": 64, 00:03:58.981 "max_r2t_per_connection": 4, 00:03:58.981 "pdu_pool_size": 36864, 00:03:58.981 "immediate_data_pool_size": 16384, 00:03:58.981 "data_out_pool_size": 2048 00:03:58.981 } 00:03:58.981 } 00:03:58.981 ] 00:03:58.981 } 00:03:58.981 ] 00:03:58.981 } 00:03:58.981 20:27:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:58.981 20:27:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56545 00:03:58.981 20:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56545 ']' 00:03:58.981 20:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56545 00:03:58.981 20:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:58.981 20:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:58.981 20:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56545 00:03:58.981 20:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:58.981 killing process with pid 56545 00:03:58.981 20:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:58.981 20:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56545' 00:03:58.981 20:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56545 00:03:58.981 20:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56545 00:03:59.554 20:27:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=56577 00:03:59.554 20:27:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:59.554 20:27:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:04.840 20:27:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 56577 00:04:04.840 20:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56577 ']' 00:04:04.840 20:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56577 00:04:04.840 20:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:04.840 20:27:18 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:04.840 20:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56577 00:04:04.840 20:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:04.840 killing process with pid 56577 00:04:04.840 20:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:04.841 20:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56577' 00:04:04.841 20:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56577 00:04:04.841 20:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56577 00:04:04.841 20:27:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:04.841 20:27:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:04.841 00:04:04.841 real 0m6.692s 00:04:04.841 user 0m6.365s 00:04:04.841 sys 0m0.589s 00:04:04.841 20:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.841 ************************************ 00:04:04.841 END TEST skip_rpc_with_json 00:04:04.841 ************************************ 00:04:04.841 20:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:04.841 20:27:19 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:04.841 20:27:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.841 20:27:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.841 20:27:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.841 ************************************ 00:04:04.841 START TEST skip_rpc_with_delay 00:04:04.841 ************************************ 00:04:04.841 20:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:04.841 20:27:19 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:04.841 20:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:04.841 20:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:04.841 20:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:04.841 20:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:04.841 20:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:04.841 20:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:04.841 20:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:04.841 20:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:04.841 20:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:04.841 20:27:19 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:04.841 20:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:04.841 [2024-11-26 20:27:19.139301] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:04.841 20:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:04.841 20:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:04.841 20:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:04.841 20:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:04.841 00:04:04.841 real 0m0.058s 00:04:04.841 user 0m0.028s 00:04:04.841 sys 0m0.029s 00:04:04.841 20:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.841 20:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:04.841 ************************************ 00:04:04.841 END TEST skip_rpc_with_delay 00:04:04.841 ************************************ 00:04:04.841 20:27:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:04.841 20:27:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:04.841 20:27:19 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:04.841 20:27:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.841 20:27:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.841 20:27:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.841 ************************************ 00:04:04.841 START TEST exit_on_failed_rpc_init 00:04:04.841 ************************************ 00:04:04.841 20:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:04.841 20:27:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=56681 00:04:04.841 20:27:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 56681 00:04:04.841 20:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 56681 ']' 00:04:04.841 20:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:04.841 20:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:04.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:04.841 20:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:04.841 20:27:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:04.841 20:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:04.841 20:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:04.841 [2024-11-26 20:27:19.257763] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:04:04.841 [2024-11-26 20:27:19.257827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56681 ] 00:04:05.099 [2024-11-26 20:27:19.395514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.099 [2024-11-26 20:27:19.431960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.099 [2024-11-26 20:27:19.476065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:05.665 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:05.665 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:05.665 20:27:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:05.665 20:27:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:05.665 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:05.665 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:05.665 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:05.665 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:05.665 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:05.665 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:05.665 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:05.665 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:05.665 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:05.665 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:05.665 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:05.665 [2024-11-26 20:27:20.173302] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:04:05.665 [2024-11-26 20:27:20.173368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56699 ] 00:04:05.922 [2024-11-26 20:27:20.314557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.923 [2024-11-26 20:27:20.351475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:05.923 [2024-11-26 20:27:20.351538] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:05.923 [2024-11-26 20:27:20.351546] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:05.923 [2024-11-26 20:27:20.351552] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:05.923 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:05.923 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:05.923 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:05.923 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:05.923 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:05.923 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:05.923 20:27:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:05.923 20:27:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 56681 00:04:05.923 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 56681 ']' 00:04:05.923 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 56681 00:04:05.923 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:05.923 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:05.923 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56681 00:04:05.923 killing process with pid 56681 00:04:05.923 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:05.923 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:05.923 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56681' 00:04:05.923 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 56681 00:04:05.923 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 56681 00:04:06.181 00:04:06.181 real 0m1.421s 00:04:06.181 user 0m1.631s 00:04:06.181 sys 0m0.258s 00:04:06.181 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.181 ************************************ 00:04:06.181 END TEST exit_on_failed_rpc_init 00:04:06.181 ************************************ 00:04:06.181 20:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:06.181 20:27:20 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:06.181 00:04:06.181 real 0m13.918s 00:04:06.181 user 0m13.232s 00:04:06.181 sys 0m1.238s 00:04:06.181 ************************************ 00:04:06.181 END TEST skip_rpc 00:04:06.181 ************************************ 00:04:06.181 20:27:20 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.181 20:27:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.181 20:27:20 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:06.181 20:27:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.181 20:27:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.181 20:27:20 -- common/autotest_common.sh@10 -- # set +x 00:04:06.471 
************************************ 00:04:06.471 START TEST rpc_client 00:04:06.471 ************************************ 00:04:06.471 20:27:20 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:06.471 * Looking for test storage... 00:04:06.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:06.471 20:27:20 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:06.471 20:27:20 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:06.471 20:27:20 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:06.471 20:27:20 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:06.471 20:27:20 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.471 20:27:20 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.471 20:27:20 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.471 20:27:20 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.471 20:27:20 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.471 20:27:20 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.472 20:27:20 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.472 20:27:20 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.472 20:27:20 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.472 20:27:20 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.472 20:27:20 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.472 20:27:20 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:06.472 20:27:20 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:06.472 20:27:20 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.472 20:27:20 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:06.472 20:27:20 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:06.472 20:27:20 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:06.472 20:27:20 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.472 20:27:20 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:06.472 20:27:20 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.472 20:27:20 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:06.472 20:27:20 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:06.472 20:27:20 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.472 20:27:20 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:06.472 20:27:20 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.472 20:27:20 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.472 20:27:20 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.472 20:27:20 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:06.472 20:27:20 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.472 20:27:20 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:06.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.472 --rc genhtml_branch_coverage=1 00:04:06.472 --rc genhtml_function_coverage=1 00:04:06.472 --rc genhtml_legend=1 00:04:06.472 --rc geninfo_all_blocks=1 00:04:06.472 --rc geninfo_unexecuted_blocks=1 00:04:06.472 00:04:06.472 ' 00:04:06.472 20:27:20 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:06.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.472 --rc genhtml_branch_coverage=1 00:04:06.472 --rc genhtml_function_coverage=1 00:04:06.472 --rc genhtml_legend=1 00:04:06.472 --rc geninfo_all_blocks=1 00:04:06.472 --rc geninfo_unexecuted_blocks=1 00:04:06.472 00:04:06.472 ' 00:04:06.472 20:27:20 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:06.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.472 --rc genhtml_branch_coverage=1 00:04:06.472 --rc genhtml_function_coverage=1 00:04:06.472 --rc genhtml_legend=1 00:04:06.472 --rc geninfo_all_blocks=1 00:04:06.472 --rc geninfo_unexecuted_blocks=1 00:04:06.472 00:04:06.472 ' 00:04:06.472 20:27:20 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:06.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.472 --rc genhtml_branch_coverage=1 00:04:06.472 --rc genhtml_function_coverage=1 00:04:06.472 --rc genhtml_legend=1 00:04:06.472 --rc geninfo_all_blocks=1 00:04:06.472 --rc geninfo_unexecuted_blocks=1 00:04:06.472 00:04:06.472 ' 00:04:06.472 20:27:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:06.472 OK 00:04:06.472 20:27:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:06.472 ************************************ 00:04:06.472 END TEST rpc_client 00:04:06.472 ************************************ 00:04:06.472 00:04:06.472 real 0m0.164s 00:04:06.472 user 0m0.100s 00:04:06.472 sys 0m0.069s 00:04:06.472 20:27:20 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.472 20:27:20 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:06.472 20:27:20 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:06.472 20:27:20 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.472 20:27:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.472 20:27:20 -- common/autotest_common.sh@10 -- # set +x 00:04:06.472 ************************************ 00:04:06.472 START TEST json_config 00:04:06.472 ************************************ 00:04:06.472 20:27:20 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:06.778 20:27:21 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:06.778 20:27:21 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:06.778 20:27:21 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:06.778 20:27:21 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:06.778 20:27:21 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.778 20:27:21 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.778 20:27:21 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.778 20:27:21 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.778 20:27:21 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.778 20:27:21 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.778 20:27:21 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.778 20:27:21 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.778 20:27:21 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.778 20:27:21 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.778 20:27:21 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.778 20:27:21 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:06.778 20:27:21 json_config -- scripts/common.sh@345 -- # : 1 00:04:06.778 20:27:21 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.778 20:27:21 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:06.778 20:27:21 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:06.778 20:27:21 json_config -- scripts/common.sh@353 -- # local d=1 00:04:06.778 20:27:21 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.778 20:27:21 json_config -- scripts/common.sh@355 -- # echo 1 00:04:06.778 20:27:21 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.779 20:27:21 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:06.779 20:27:21 json_config -- scripts/common.sh@353 -- # local d=2 00:04:06.779 20:27:21 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.779 20:27:21 json_config -- scripts/common.sh@355 -- # echo 2 00:04:06.779 20:27:21 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.779 20:27:21 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.779 20:27:21 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.779 20:27:21 json_config -- scripts/common.sh@368 -- # return 0 00:04:06.779 20:27:21 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.779 20:27:21 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:06.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.779 --rc genhtml_branch_coverage=1 00:04:06.779 --rc genhtml_function_coverage=1 00:04:06.779 --rc genhtml_legend=1 00:04:06.779 --rc geninfo_all_blocks=1 00:04:06.779 --rc geninfo_unexecuted_blocks=1 00:04:06.779 00:04:06.779 ' 00:04:06.779 20:27:21 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:06.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.779 --rc genhtml_branch_coverage=1 00:04:06.779 --rc genhtml_function_coverage=1 00:04:06.779 --rc genhtml_legend=1 00:04:06.779 --rc geninfo_all_blocks=1 00:04:06.779 --rc geninfo_unexecuted_blocks=1 00:04:06.779 00:04:06.779 ' 00:04:06.779 20:27:21 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:06.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.779 --rc genhtml_branch_coverage=1 00:04:06.779 --rc genhtml_function_coverage=1 00:04:06.779 --rc genhtml_legend=1 00:04:06.779 --rc geninfo_all_blocks=1 00:04:06.779 --rc geninfo_unexecuted_blocks=1 00:04:06.779 00:04:06.779 ' 00:04:06.779 20:27:21 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:06.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.779 --rc genhtml_branch_coverage=1 00:04:06.779 --rc genhtml_function_coverage=1 00:04:06.779 --rc genhtml_legend=1 00:04:06.779 --rc geninfo_all_blocks=1 00:04:06.779 --rc geninfo_unexecuted_blocks=1 00:04:06.779 00:04:06.779 ' 00:04:06.779 20:27:21 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:06.779 20:27:21 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:06.779 20:27:21 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:06.779 20:27:21 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:06.779 20:27:21 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:06.779 20:27:21 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:06.779 20:27:21 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:06.779 20:27:21 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:06.779 20:27:21 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:06.779 20:27:21 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:06.779 20:27:21 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:06.779 20:27:21 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:06.779 20:27:21 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:04:06.779 20:27:21 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:04:06.779 20:27:21 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:06.779 20:27:21 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:06.779 20:27:21 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:06.779 20:27:21 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:06.779 20:27:21 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:06.779 20:27:21 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:06.779 20:27:21 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:06.779 20:27:21 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:06.779 20:27:21 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:06.779 20:27:21 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.779 20:27:21 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.779 20:27:21 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.779 20:27:21 json_config -- paths/export.sh@5 -- # export PATH 00:04:06.779 20:27:21 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.779 20:27:21 json_config -- nvmf/common.sh@51 -- # : 0 00:04:06.779 20:27:21 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:06.779 20:27:21 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:06.779 20:27:21 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:06.779 20:27:21 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:06.779 20:27:21 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:06.779 20:27:21 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:06.779 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:06.779 20:27:21 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:06.779 20:27:21 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:06.779 20:27:21 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:06.779 20:27:21 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:06.779 20:27:21 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:06.779 20:27:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:06.779 20:27:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:06.779 20:27:21 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:06.779 20:27:21 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:06.779 20:27:21 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:06.779 20:27:21 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:06.779 20:27:21 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:06.779 20:27:21 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:06.779 20:27:21 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:06.779 20:27:21 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:06.779 20:27:21 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:06.779 20:27:21 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:06.779 20:27:21 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:06.779 INFO: JSON configuration test init 00:04:06.779 20:27:21 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:06.779 20:27:21 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:06.779 20:27:21 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:06.779 20:27:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:06.779 20:27:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.779 20:27:21 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:06.779 20:27:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:06.779 20:27:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.779 20:27:21 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:06.779 20:27:21 json_config -- json_config/common.sh@9 -- # local app=target 00:04:06.779 20:27:21 json_config -- json_config/common.sh@10 -- # shift 
00:04:06.779 Waiting for target to run... 00:04:06.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:06.779 20:27:21 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:06.779 20:27:21 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:06.779 20:27:21 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:06.779 20:27:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.779 20:27:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.779 20:27:21 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=56833 00:04:06.779 20:27:21 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:06.779 20:27:21 json_config -- json_config/common.sh@25 -- # waitforlisten 56833 /var/tmp/spdk_tgt.sock 00:04:06.779 20:27:21 json_config -- common/autotest_common.sh@835 -- # '[' -z 56833 ']' 00:04:06.779 20:27:21 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:06.779 20:27:21 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:06.779 20:27:21 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:06.779 20:27:21 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:06.779 20:27:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.779 20:27:21 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:06.779 [2024-11-26 20:27:21.166096] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:04:06.780 [2024-11-26 20:27:21.166305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56833 ] 00:04:07.039 [2024-11-26 20:27:21.459532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.039 [2024-11-26 20:27:21.488291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.605 20:27:22 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:07.605 20:27:22 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:07.605 20:27:22 json_config -- json_config/common.sh@26 -- # echo '' 00:04:07.605 00:04:07.605 20:27:22 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:07.605 20:27:22 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:07.605 20:27:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:07.605 20:27:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.605 20:27:22 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:07.605 20:27:22 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:07.605 20:27:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:07.605 20:27:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.605 20:27:22 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:07.605 20:27:22 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:07.605 20:27:22 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:07.864 [2024-11-26 20:27:22.299700] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:08.123 20:27:22 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:08.123 20:27:22 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:08.123 20:27:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:08.123 20:27:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.123 20:27:22 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:08.123 20:27:22 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:08.124 20:27:22 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:08.124 20:27:22 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:08.124 20:27:22 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:08.124 20:27:22 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:08.124 20:27:22 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:08.124 20:27:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:08.381 20:27:22 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:08.381 20:27:22 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:08.381 20:27:22 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:08.381 20:27:22 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:08.381 20:27:22 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:08.381 20:27:22 json_config -- json_config/json_config.sh@54 -- # sort 00:04:08.381 20:27:22 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:08.381 20:27:22 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:08.381 20:27:22 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:08.381 20:27:22 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:08.381 20:27:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:08.381 20:27:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.381 20:27:22 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:08.381 20:27:22 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:08.381 20:27:22 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:08.381 20:27:22 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:08.381 20:27:22 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:08.382 20:27:22 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:08.382 20:27:22 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:08.382 20:27:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:08.382 20:27:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.382 20:27:22 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:08.382 20:27:22 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:08.382 20:27:22 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:08.382 20:27:22 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:08.382 20:27:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:08.639 MallocForNvmf0 00:04:08.639 20:27:22 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:08.639 20:27:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:08.639 MallocForNvmf1 00:04:08.639 20:27:23 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:08.639 20:27:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:08.895 [2024-11-26 20:27:23.392727] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:08.895 20:27:23 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:08.895 20:27:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:09.151 20:27:23 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:09.151 20:27:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:09.408 20:27:23 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:09.408 20:27:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:09.665 20:27:24 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:09.665 20:27:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:09.665 [2024-11-26 20:27:24.197076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:09.665 20:27:24 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:09.665 20:27:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:09.665 20:27:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.923 20:27:24 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:09.923 20:27:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:09.923 20:27:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.923 20:27:24 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:04:09.923 20:27:24 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:09.923 20:27:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:10.181 MallocBdevForConfigChangeCheck 00:04:10.181 20:27:24 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:10.181 20:27:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:10.182 20:27:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.182 20:27:24 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:10.182 20:27:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:10.440 INFO: shutting down applications... 00:04:10.440 20:27:24 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:10.440 20:27:24 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:10.440 20:27:24 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:10.440 20:27:24 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:10.440 20:27:24 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:10.698 Calling clear_iscsi_subsystem 00:04:10.698 Calling clear_nvmf_subsystem 00:04:10.698 Calling clear_nbd_subsystem 00:04:10.698 Calling clear_ublk_subsystem 00:04:10.698 Calling clear_vhost_blk_subsystem 00:04:10.698 Calling clear_vhost_scsi_subsystem 00:04:10.698 Calling clear_bdev_subsystem 00:04:10.698 20:27:25 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:10.698 20:27:25 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:10.698 20:27:25 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:10.698 20:27:25 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:10.698 20:27:25 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:10.698 20:27:25 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:11.264 20:27:25 json_config -- json_config/json_config.sh@352 -- # break 00:04:11.264 20:27:25 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:11.264 20:27:25 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:11.264 20:27:25 json_config -- json_config/common.sh@31 -- # local app=target 00:04:11.264 20:27:25 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:11.264 20:27:25 json_config -- json_config/common.sh@35 -- # [[ -n 56833 ]] 00:04:11.264 20:27:25 json_config -- json_config/common.sh@38 -- # kill -SIGINT 56833 00:04:11.264 20:27:25 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:11.264 20:27:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:11.264 20:27:25 json_config -- json_config/common.sh@41 -- # kill -0 56833 00:04:11.265 20:27:25 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:04:11.523 20:27:26 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:11.523 20:27:26 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:11.523 20:27:26 json_config -- json_config/common.sh@41 -- # kill -0 56833 00:04:11.523 20:27:26 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:11.523 20:27:26 json_config -- json_config/common.sh@43 -- # break 00:04:11.523 20:27:26 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:11.523 SPDK target shutdown done 00:04:11.523 INFO: relaunching applications... 00:04:11.523 20:27:26 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:11.523 20:27:26 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:11.523 20:27:26 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:11.523 20:27:26 json_config -- json_config/common.sh@9 -- # local app=target 00:04:11.523 20:27:26 json_config -- json_config/common.sh@10 -- # shift 00:04:11.523 20:27:26 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:11.523 20:27:26 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:11.523 20:27:26 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:11.523 20:27:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:11.523 20:27:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:11.523 20:27:26 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57018 00:04:11.523 20:27:26 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:11.523 20:27:26 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:11.523 Waiting for target to run... 00:04:11.523 20:27:26 json_config -- json_config/common.sh@25 -- # waitforlisten 57018 /var/tmp/spdk_tgt.sock 00:04:11.523 20:27:26 json_config -- common/autotest_common.sh@835 -- # '[' -z 57018 ']' 00:04:11.523 20:27:26 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:11.523 20:27:26 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:11.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:11.523 20:27:26 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:11.523 20:27:26 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:11.523 20:27:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.781 [2024-11-26 20:27:26.096911] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:04:11.781 [2024-11-26 20:27:26.097102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57018 ] 00:04:12.044 [2024-11-26 20:27:26.394540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.044 [2024-11-26 20:27:26.423742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.044 [2024-11-26 20:27:26.560977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:12.312 [2024-11-26 20:27:26.769861] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:12.312 [2024-11-26 20:27:26.801891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:12.570 20:27:26 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:12.570 20:27:26 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:12.570 00:04:12.570 INFO: Checking if target configuration is the same... 00:04:12.570 20:27:26 json_config -- json_config/common.sh@26 -- # echo '' 00:04:12.570 20:27:26 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:12.570 20:27:26 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:12.570 20:27:27 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:12.570 20:27:27 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:12.570 20:27:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:12.570 + '[' 2 -ne 2 ']' 00:04:12.570 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:12.570 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:12.570 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:12.570 +++ basename /dev/fd/62 00:04:12.570 ++ mktemp /tmp/62.XXX 00:04:12.570 + tmp_file_1=/tmp/62.QTu 00:04:12.570 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:12.570 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:12.570 + tmp_file_2=/tmp/spdk_tgt_config.json.hqn 00:04:12.570 + ret=0 00:04:12.570 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:12.828 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:13.085 + diff -u /tmp/62.QTu /tmp/spdk_tgt_config.json.hqn 00:04:13.085 INFO: JSON config files are the same 00:04:13.085 + echo 'INFO: JSON config files are the same' 00:04:13.085 + rm /tmp/62.QTu /tmp/spdk_tgt_config.json.hqn 00:04:13.085 + exit 0 00:04:13.085 INFO: changing configuration and checking if this can be detected... 00:04:13.085 20:27:27 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:13.085 20:27:27 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:04:13.085 20:27:27 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:13.085 20:27:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:13.085 20:27:27 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:13.085 20:27:27 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:13.085 20:27:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:13.085 + '[' 2 -ne 2 ']' 00:04:13.085 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:13.085 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:13.085 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:13.085 +++ basename /dev/fd/62 00:04:13.085 ++ mktemp /tmp/62.XXX 00:04:13.085 + tmp_file_1=/tmp/62.k1T 00:04:13.085 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:13.085 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:13.085 + tmp_file_2=/tmp/spdk_tgt_config.json.ihv 00:04:13.085 + ret=0 00:04:13.085 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:13.648 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:13.648 + diff -u /tmp/62.k1T /tmp/spdk_tgt_config.json.ihv 00:04:13.648 + ret=1 00:04:13.648 + echo '=== Start of file: /tmp/62.k1T ===' 00:04:13.648 + cat /tmp/62.k1T 00:04:13.648 + echo '=== End of file: /tmp/62.k1T ===' 00:04:13.648 + echo '' 00:04:13.648 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ihv ===' 00:04:13.648 + cat /tmp/spdk_tgt_config.json.ihv 00:04:13.648 + echo '=== End of file: /tmp/spdk_tgt_config.json.ihv ===' 00:04:13.648 + echo '' 00:04:13.648 + rm /tmp/62.k1T /tmp/spdk_tgt_config.json.ihv 00:04:13.648 + exit 1 00:04:13.648 INFO: configuration change detected. 00:04:13.648 20:27:28 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
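The comparison traced above reduces to dumping the live configuration over the RPC socket, normalizing both JSON documents with config_filter.py, and diffing them. Below is a minimal standalone sketch of that flow, not the test script itself: the rpc.py and config_filter.py paths and the `save_config` / `-method sort` usage match the trace, while the temp-file names and the baseline-config variable are illustrative placeholders.

```bash
#!/usr/bin/env bash
# Sketch of the config-drift check seen in the trace above (not the test script itself).
# Assumes an spdk_tgt is already listening on $RPC_SOCK and was started with $BASELINE_JSON.
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk          # repo root, as in the trace
RPC_SOCK=/var/tmp/spdk_tgt.sock                # RPC socket used by the target
BASELINE_JSON=$SPDK_DIR/spdk_tgt_config.json   # config the target was launched with

live=$(mktemp /tmp/live.XXX)
base=$(mktemp /tmp/base.XXX)

# Dump the running target's configuration, then sort both documents so the
# comparison does not depend on key or array ordering.
"$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" save_config \
  | "$SPDK_DIR/test/json_config/config_filter.py" -method sort > "$live"
"$SPDK_DIR/test/json_config/config_filter.py" -method sort < "$BASELINE_JSON" > "$base"

if diff -u "$base" "$live"; then
  echo "INFO: JSON config files are the same"
else
  echo "INFO: configuration change detected."
fi
rm -f "$live" "$base"
```

In the test above, the second run differs from the first only because `bdev_malloc_delete MallocBdevForConfigChangeCheck` removed a bdev between the dump and the diff, which is what flips the result from exit 0 to exit 1.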
00:04:13.648 20:27:28 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:13.648 20:27:28 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:13.648 20:27:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:13.648 20:27:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.649 20:27:28 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:13.649 20:27:28 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:13.649 20:27:28 json_config -- json_config/json_config.sh@324 -- # [[ -n 57018 ]] 00:04:13.649 20:27:28 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:13.649 20:27:28 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:13.649 20:27:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:13.649 20:27:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.649 20:27:28 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:13.649 20:27:28 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:13.649 20:27:28 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:13.649 20:27:28 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:13.649 20:27:28 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:13.649 20:27:28 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:13.649 20:27:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:13.649 20:27:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.649 20:27:28 json_config -- json_config/json_config.sh@330 -- # killprocess 57018 00:04:13.649 20:27:28 json_config -- common/autotest_common.sh@954 -- # '[' -z 57018 ']' 00:04:13.649 20:27:28 json_config -- common/autotest_common.sh@958 -- # kill -0 57018 00:04:13.649 20:27:28 json_config -- common/autotest_common.sh@959 -- # uname 00:04:13.649 20:27:28 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:13.649 20:27:28 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57018 00:04:13.649 killing process with pid 57018 00:04:13.649 20:27:28 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:13.649 20:27:28 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:13.649 20:27:28 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57018' 00:04:13.649 20:27:28 json_config -- common/autotest_common.sh@973 -- # kill 57018 00:04:13.649 20:27:28 json_config -- common/autotest_common.sh@978 -- # wait 57018 00:04:13.905 20:27:28 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:13.905 20:27:28 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:13.905 20:27:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:13.905 20:27:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.905 INFO: Success 00:04:13.905 20:27:28 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:13.905 20:27:28 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:13.905 ************************************ 00:04:13.905 END TEST json_config 00:04:13.905 
************************************ 00:04:13.905 00:04:13.905 real 0m7.322s 00:04:13.905 user 0m10.191s 00:04:13.905 sys 0m1.184s 00:04:13.905 20:27:28 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.905 20:27:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.905 20:27:28 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:13.905 20:27:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.905 20:27:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.905 20:27:28 -- common/autotest_common.sh@10 -- # set +x 00:04:13.905 ************************************ 00:04:13.905 START TEST json_config_extra_key 00:04:13.905 ************************************ 00:04:13.905 20:27:28 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:13.905 20:27:28 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:13.905 20:27:28 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:13.905 20:27:28 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:14.162 20:27:28 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.162 20:27:28 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:14.162 20:27:28 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.162 20:27:28 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:14.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.162 --rc genhtml_branch_coverage=1 00:04:14.162 --rc genhtml_function_coverage=1 00:04:14.162 --rc genhtml_legend=1 00:04:14.162 --rc geninfo_all_blocks=1 00:04:14.162 --rc geninfo_unexecuted_blocks=1 00:04:14.162 00:04:14.162 ' 00:04:14.162 20:27:28 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:14.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.162 --rc genhtml_branch_coverage=1 00:04:14.162 --rc genhtml_function_coverage=1 00:04:14.162 --rc genhtml_legend=1 00:04:14.162 --rc geninfo_all_blocks=1 00:04:14.162 --rc geninfo_unexecuted_blocks=1 00:04:14.162 00:04:14.162 ' 00:04:14.162 20:27:28 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:14.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.162 --rc genhtml_branch_coverage=1 00:04:14.162 --rc genhtml_function_coverage=1 00:04:14.162 --rc genhtml_legend=1 00:04:14.162 --rc geninfo_all_blocks=1 00:04:14.162 --rc geninfo_unexecuted_blocks=1 00:04:14.162 00:04:14.162 ' 00:04:14.162 20:27:28 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:14.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.162 --rc genhtml_branch_coverage=1 00:04:14.162 --rc genhtml_function_coverage=1 00:04:14.162 --rc genhtml_legend=1 00:04:14.162 --rc geninfo_all_blocks=1 00:04:14.162 --rc geninfo_unexecuted_blocks=1 00:04:14.162 00:04:14.162 ' 00:04:14.162 20:27:28 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:14.162 20:27:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:14.162 20:27:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:14.162 20:27:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:14.162 20:27:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:14.162 20:27:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:14.162 20:27:28 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:14.162 20:27:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:14.163 20:27:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:14.163 20:27:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:14.163 20:27:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:14.163 20:27:28 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:14.163 20:27:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:04:14.163 20:27:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:04:14.163 20:27:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:14.163 20:27:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:14.163 20:27:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:14.163 20:27:28 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:14.163 20:27:28 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:14.163 20:27:28 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:14.163 20:27:28 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:14.163 20:27:28 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:14.163 20:27:28 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:14.163 20:27:28 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.163 20:27:28 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.163 20:27:28 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.163 20:27:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:14.163 20:27:28 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.163 20:27:28 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:14.163 20:27:28 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:14.163 20:27:28 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:14.163 20:27:28 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:14.163 20:27:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:14.163 20:27:28 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:14.163 20:27:28 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:14.163 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:14.163 20:27:28 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:14.163 20:27:28 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:14.163 20:27:28 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:14.163 20:27:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:14.163 20:27:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:14.163 20:27:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:14.163 20:27:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:14.163 20:27:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:14.163 20:27:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:14.163 20:27:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:14.163 20:27:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:14.163 20:27:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:14.163 20:27:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:14.163 INFO: launching applications... 00:04:14.163 20:27:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
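The "launching applications..." step that follows amounts to starting spdk_tgt with a JSON config file and polling until its RPC socket accepts connections. A condensed sketch of that pattern is below; the spdk_tgt flags, socket path, and extra_key.json path are taken from the trace, while the polling loop, its 30-second budget, and the per-call `-t 1` timeout are illustrative assumptions (the real helper is `waitforlisten` in autotest_common.sh).

```bash
#!/usr/bin/env bash
# Sketch: launch spdk_tgt with a JSON config, then wait for its RPC socket.
# Paths and flags match the trace; the timeout values are assumptions for illustration.
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/spdk_tgt.sock
CONFIG_JSON=$SPDK_DIR/test/json_config/extra_key.json

"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$RPC_SOCK" --json "$CONFIG_JSON" &
tgt_pid=$!
echo "Waiting for target to run... (pid $tgt_pid)"

for _ in $(seq 1 60); do
  # rpc_get_methods succeeds only once the target is listening on $RPC_SOCK.
  if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" -t 1 rpc_get_methods &>/dev/null; then
    echo "Target is listening on $RPC_SOCK"
    exit 0
  fi
  sleep 0.5
done
echo "Timed out waiting for $RPC_SOCK" >&2
kill "$tgt_pid" || true
exit 1
```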
00:04:14.163 20:27:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:14.163 20:27:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:14.163 20:27:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:14.163 20:27:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:14.163 20:27:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:14.163 20:27:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:14.163 20:27:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.163 20:27:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.163 20:27:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57166 00:04:14.163 20:27:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:14.163 Waiting for target to run... 00:04:14.163 20:27:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57166 /var/tmp/spdk_tgt.sock 00:04:14.163 20:27:28 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57166 ']' 00:04:14.163 20:27:28 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:14.163 20:27:28 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:14.163 20:27:28 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:14.163 20:27:28 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:14.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:14.163 20:27:28 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:14.163 20:27:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:14.163 [2024-11-26 20:27:28.557333] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:04:14.163 [2024-11-26 20:27:28.557837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57166 ] 00:04:14.420 [2024-11-26 20:27:28.859828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.420 [2024-11-26 20:27:28.893946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.420 [2024-11-26 20:27:28.925635] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:14.983 00:04:14.983 INFO: shutting down applications... 00:04:14.983 20:27:29 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:14.983 20:27:29 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:14.983 20:27:29 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:14.983 20:27:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
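The shutdown traced next follows a simple pattern: send SIGINT to the target, then poll with `kill -0` until the process disappears, with up to 30 half-second attempts. A minimal sketch of that loop is below, with the pid passed as a parameter instead of being read from the test's app_pid array.

```bash
#!/usr/bin/env bash
# Sketch of the SIGINT-then-poll shutdown traced below; pass the target pid as $1.
set -euo pipefail

pid=${1:?usage: $0 <spdk_tgt pid>}

kill -SIGINT "$pid"
for ((i = 0; i < 30; i++)); do
  # kill -0 only checks for existence; it fails once the process has exited.
  if ! kill -0 "$pid" 2>/dev/null; then
    echo "SPDK target shutdown done"
    exit 0
  fi
  sleep 0.5
done
echo "Target $pid did not exit within 15 s" >&2
exit 1
```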
00:04:14.983 20:27:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:14.983 20:27:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:14.983 20:27:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:14.983 20:27:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57166 ]] 00:04:14.983 20:27:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57166 00:04:14.983 20:27:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:14.983 20:27:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:14.983 20:27:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57166 00:04:14.983 20:27:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:15.548 20:27:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:15.548 20:27:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:15.548 20:27:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57166 00:04:15.548 20:27:29 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:15.548 20:27:29 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:15.548 20:27:29 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:15.548 SPDK target shutdown done 00:04:15.548 20:27:29 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:15.548 20:27:29 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:15.548 Success 00:04:15.548 00:04:15.548 real 0m1.632s 00:04:15.548 user 0m1.374s 00:04:15.548 sys 0m0.292s 00:04:15.548 20:27:29 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.548 20:27:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:15.548 ************************************ 00:04:15.548 END TEST json_config_extra_key 00:04:15.548 ************************************ 00:04:15.548 20:27:30 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:15.548 20:27:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.548 20:27:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.548 20:27:30 -- common/autotest_common.sh@10 -- # set +x 00:04:15.548 ************************************ 00:04:15.548 START TEST alias_rpc 00:04:15.548 ************************************ 00:04:15.548 20:27:30 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:15.805 * Looking for test storage... 
00:04:15.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:15.805 20:27:30 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:15.805 20:27:30 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:15.805 20:27:30 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:15.805 20:27:30 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:15.805 20:27:30 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.805 20:27:30 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.805 20:27:30 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.805 20:27:30 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.805 20:27:30 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.805 20:27:30 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.805 20:27:30 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.805 20:27:30 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.805 20:27:30 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.805 20:27:30 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.805 20:27:30 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.805 20:27:30 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:15.805 20:27:30 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:15.805 20:27:30 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.805 20:27:30 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:15.806 20:27:30 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:15.806 20:27:30 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:15.806 20:27:30 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.806 20:27:30 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:15.806 20:27:30 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.806 20:27:30 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:15.806 20:27:30 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:15.806 20:27:30 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.806 20:27:30 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:15.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:15.806 20:27:30 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.806 20:27:30 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.806 20:27:30 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.806 20:27:30 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:15.806 20:27:30 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.806 20:27:30 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:15.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.806 --rc genhtml_branch_coverage=1 00:04:15.806 --rc genhtml_function_coverage=1 00:04:15.806 --rc genhtml_legend=1 00:04:15.806 --rc geninfo_all_blocks=1 00:04:15.806 --rc geninfo_unexecuted_blocks=1 00:04:15.806 00:04:15.806 ' 00:04:15.806 20:27:30 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:15.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.806 --rc genhtml_branch_coverage=1 00:04:15.806 --rc genhtml_function_coverage=1 00:04:15.806 --rc genhtml_legend=1 00:04:15.806 --rc geninfo_all_blocks=1 00:04:15.806 --rc geninfo_unexecuted_blocks=1 00:04:15.806 00:04:15.806 ' 00:04:15.806 20:27:30 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:15.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.806 --rc genhtml_branch_coverage=1 00:04:15.806 --rc genhtml_function_coverage=1 00:04:15.806 --rc genhtml_legend=1 00:04:15.806 --rc geninfo_all_blocks=1 00:04:15.806 --rc geninfo_unexecuted_blocks=1 00:04:15.806 00:04:15.806 ' 00:04:15.806 20:27:30 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:15.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.806 --rc genhtml_branch_coverage=1 00:04:15.806 --rc genhtml_function_coverage=1 00:04:15.806 --rc genhtml_legend=1 00:04:15.806 --rc geninfo_all_blocks=1 00:04:15.806 --rc geninfo_unexecuted_blocks=1 00:04:15.806 00:04:15.806 ' 00:04:15.806 20:27:30 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:15.806 20:27:30 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57240 00:04:15.806 20:27:30 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57240 00:04:15.806 20:27:30 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57240 ']' 00:04:15.806 20:27:30 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:15.806 20:27:30 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.806 20:27:30 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:15.806 20:27:30 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.806 20:27:30 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:15.806 20:27:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.806 [2024-11-26 20:27:30.247190] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:04:15.806 [2024-11-26 20:27:30.247266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57240 ] 00:04:16.063 [2024-11-26 20:27:30.398022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.063 [2024-11-26 20:27:30.434926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.063 [2024-11-26 20:27:30.481896] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:16.629 20:27:31 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:16.629 20:27:31 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:16.629 20:27:31 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:16.887 20:27:31 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57240 00:04:16.887 20:27:31 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57240 ']' 00:04:16.887 20:27:31 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57240 00:04:16.887 20:27:31 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:16.887 20:27:31 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:16.887 20:27:31 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57240 00:04:16.887 killing process with pid 57240 00:04:16.887 20:27:31 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:16.887 20:27:31 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:16.887 20:27:31 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57240' 00:04:16.887 20:27:31 alias_rpc -- common/autotest_common.sh@973 -- # kill 57240 00:04:16.887 20:27:31 alias_rpc -- common/autotest_common.sh@978 -- # wait 57240 00:04:17.144 ************************************ 00:04:17.144 END TEST alias_rpc 00:04:17.144 ************************************ 00:04:17.144 00:04:17.144 real 0m1.522s 00:04:17.144 user 0m1.714s 00:04:17.144 sys 0m0.316s 00:04:17.144 20:27:31 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.144 20:27:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.144 20:27:31 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:17.144 20:27:31 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:17.144 20:27:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.144 20:27:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.144 20:27:31 -- common/autotest_common.sh@10 -- # set +x 00:04:17.144 ************************************ 00:04:17.144 START TEST spdkcli_tcp 00:04:17.144 ************************************ 00:04:17.144 20:27:31 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:17.403 * Looking for test storage... 
00:04:17.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:17.403 20:27:31 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:17.403 20:27:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:17.403 20:27:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:17.403 20:27:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:17.403 20:27:31 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:17.403 20:27:31 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.403 20:27:31 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:17.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.403 --rc genhtml_branch_coverage=1 00:04:17.403 --rc genhtml_function_coverage=1 00:04:17.403 --rc genhtml_legend=1 00:04:17.403 --rc geninfo_all_blocks=1 00:04:17.403 --rc geninfo_unexecuted_blocks=1 00:04:17.403 00:04:17.403 ' 00:04:17.403 20:27:31 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:17.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.403 --rc genhtml_branch_coverage=1 00:04:17.403 --rc genhtml_function_coverage=1 00:04:17.403 --rc genhtml_legend=1 00:04:17.403 --rc geninfo_all_blocks=1 00:04:17.403 --rc geninfo_unexecuted_blocks=1 00:04:17.403 
00:04:17.403 ' 00:04:17.403 20:27:31 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:17.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.403 --rc genhtml_branch_coverage=1 00:04:17.403 --rc genhtml_function_coverage=1 00:04:17.403 --rc genhtml_legend=1 00:04:17.403 --rc geninfo_all_blocks=1 00:04:17.403 --rc geninfo_unexecuted_blocks=1 00:04:17.403 00:04:17.403 ' 00:04:17.403 20:27:31 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:17.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.403 --rc genhtml_branch_coverage=1 00:04:17.403 --rc genhtml_function_coverage=1 00:04:17.403 --rc genhtml_legend=1 00:04:17.403 --rc geninfo_all_blocks=1 00:04:17.403 --rc geninfo_unexecuted_blocks=1 00:04:17.403 00:04:17.403 ' 00:04:17.403 20:27:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:17.403 20:27:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:17.403 20:27:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:17.403 20:27:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:17.403 20:27:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:17.403 20:27:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:17.403 20:27:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:17.403 20:27:31 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:17.403 20:27:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:17.403 20:27:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57319 00:04:17.403 20:27:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57319 00:04:17.404 20:27:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:17.404 20:27:31 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57319 ']' 00:04:17.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.404 20:27:31 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.404 20:27:31 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:17.404 20:27:31 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.404 20:27:31 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:17.404 20:27:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:17.404 [2024-11-26 20:27:31.839336] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:04:17.404 [2024-11-26 20:27:31.839506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57319 ] 00:04:17.662 [2024-11-26 20:27:31.973184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:17.662 [2024-11-26 20:27:32.012298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:17.662 [2024-11-26 20:27:32.012329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.662 [2024-11-26 20:27:32.059954] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:18.229 20:27:32 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:18.229 20:27:32 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:18.229 20:27:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57335 00:04:18.229 20:27:32 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:18.229 20:27:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:18.488 [ 00:04:18.488 "bdev_malloc_delete", 00:04:18.488 "bdev_malloc_create", 00:04:18.488 "bdev_null_resize", 00:04:18.488 "bdev_null_delete", 00:04:18.488 "bdev_null_create", 00:04:18.488 "bdev_nvme_cuse_unregister", 00:04:18.488 "bdev_nvme_cuse_register", 00:04:18.488 "bdev_opal_new_user", 00:04:18.488 "bdev_opal_set_lock_state", 00:04:18.488 "bdev_opal_delete", 00:04:18.488 "bdev_opal_get_info", 00:04:18.488 "bdev_opal_create", 00:04:18.488 "bdev_nvme_opal_revert", 00:04:18.488 "bdev_nvme_opal_init", 00:04:18.488 "bdev_nvme_send_cmd", 00:04:18.488 "bdev_nvme_set_keys", 00:04:18.488 "bdev_nvme_get_path_iostat", 00:04:18.488 "bdev_nvme_get_mdns_discovery_info", 00:04:18.488 "bdev_nvme_stop_mdns_discovery", 00:04:18.488 "bdev_nvme_start_mdns_discovery", 00:04:18.488 "bdev_nvme_set_multipath_policy", 00:04:18.488 "bdev_nvme_set_preferred_path", 00:04:18.488 "bdev_nvme_get_io_paths", 00:04:18.488 "bdev_nvme_remove_error_injection", 00:04:18.488 "bdev_nvme_add_error_injection", 00:04:18.488 "bdev_nvme_get_discovery_info", 00:04:18.488 "bdev_nvme_stop_discovery", 00:04:18.488 "bdev_nvme_start_discovery", 00:04:18.488 "bdev_nvme_get_controller_health_info", 00:04:18.488 "bdev_nvme_disable_controller", 00:04:18.488 "bdev_nvme_enable_controller", 00:04:18.488 "bdev_nvme_reset_controller", 00:04:18.488 "bdev_nvme_get_transport_statistics", 00:04:18.488 "bdev_nvme_apply_firmware", 00:04:18.488 "bdev_nvme_detach_controller", 00:04:18.488 "bdev_nvme_get_controllers", 00:04:18.488 "bdev_nvme_attach_controller", 00:04:18.488 "bdev_nvme_set_hotplug", 00:04:18.488 "bdev_nvme_set_options", 00:04:18.488 "bdev_passthru_delete", 00:04:18.488 "bdev_passthru_create", 00:04:18.488 "bdev_lvol_set_parent_bdev", 00:04:18.488 "bdev_lvol_set_parent", 00:04:18.488 "bdev_lvol_check_shallow_copy", 00:04:18.488 "bdev_lvol_start_shallow_copy", 00:04:18.488 "bdev_lvol_grow_lvstore", 00:04:18.488 "bdev_lvol_get_lvols", 00:04:18.488 "bdev_lvol_get_lvstores", 00:04:18.488 "bdev_lvol_delete", 00:04:18.488 "bdev_lvol_set_read_only", 00:04:18.488 "bdev_lvol_resize", 00:04:18.488 "bdev_lvol_decouple_parent", 00:04:18.488 "bdev_lvol_inflate", 00:04:18.488 "bdev_lvol_rename", 00:04:18.488 "bdev_lvol_clone_bdev", 00:04:18.488 "bdev_lvol_clone", 00:04:18.488 "bdev_lvol_snapshot", 
00:04:18.488 "bdev_lvol_create", 00:04:18.488 "bdev_lvol_delete_lvstore", 00:04:18.488 "bdev_lvol_rename_lvstore", 00:04:18.488 "bdev_lvol_create_lvstore", 00:04:18.488 "bdev_raid_set_options", 00:04:18.488 "bdev_raid_remove_base_bdev", 00:04:18.488 "bdev_raid_add_base_bdev", 00:04:18.488 "bdev_raid_delete", 00:04:18.488 "bdev_raid_create", 00:04:18.488 "bdev_raid_get_bdevs", 00:04:18.488 "bdev_error_inject_error", 00:04:18.488 "bdev_error_delete", 00:04:18.488 "bdev_error_create", 00:04:18.488 "bdev_split_delete", 00:04:18.488 "bdev_split_create", 00:04:18.488 "bdev_delay_delete", 00:04:18.488 "bdev_delay_create", 00:04:18.488 "bdev_delay_update_latency", 00:04:18.488 "bdev_zone_block_delete", 00:04:18.488 "bdev_zone_block_create", 00:04:18.488 "blobfs_create", 00:04:18.488 "blobfs_detect", 00:04:18.488 "blobfs_set_cache_size", 00:04:18.488 "bdev_aio_delete", 00:04:18.488 "bdev_aio_rescan", 00:04:18.488 "bdev_aio_create", 00:04:18.488 "bdev_ftl_set_property", 00:04:18.488 "bdev_ftl_get_properties", 00:04:18.488 "bdev_ftl_get_stats", 00:04:18.488 "bdev_ftl_unmap", 00:04:18.488 "bdev_ftl_unload", 00:04:18.488 "bdev_ftl_delete", 00:04:18.488 "bdev_ftl_load", 00:04:18.488 "bdev_ftl_create", 00:04:18.488 "bdev_virtio_attach_controller", 00:04:18.488 "bdev_virtio_scsi_get_devices", 00:04:18.488 "bdev_virtio_detach_controller", 00:04:18.488 "bdev_virtio_blk_set_hotplug", 00:04:18.488 "bdev_iscsi_delete", 00:04:18.488 "bdev_iscsi_create", 00:04:18.488 "bdev_iscsi_set_options", 00:04:18.488 "bdev_uring_delete", 00:04:18.488 "bdev_uring_rescan", 00:04:18.488 "bdev_uring_create", 00:04:18.488 "accel_error_inject_error", 00:04:18.488 "ioat_scan_accel_module", 00:04:18.488 "dsa_scan_accel_module", 00:04:18.488 "iaa_scan_accel_module", 00:04:18.488 "keyring_file_remove_key", 00:04:18.488 "keyring_file_add_key", 00:04:18.488 "keyring_linux_set_options", 00:04:18.488 "fsdev_aio_delete", 00:04:18.488 "fsdev_aio_create", 00:04:18.488 "iscsi_get_histogram", 00:04:18.488 "iscsi_enable_histogram", 00:04:18.488 "iscsi_set_options", 00:04:18.489 "iscsi_get_auth_groups", 00:04:18.489 "iscsi_auth_group_remove_secret", 00:04:18.489 "iscsi_auth_group_add_secret", 00:04:18.489 "iscsi_delete_auth_group", 00:04:18.489 "iscsi_create_auth_group", 00:04:18.489 "iscsi_set_discovery_auth", 00:04:18.489 "iscsi_get_options", 00:04:18.489 "iscsi_target_node_request_logout", 00:04:18.489 "iscsi_target_node_set_redirect", 00:04:18.489 "iscsi_target_node_set_auth", 00:04:18.489 "iscsi_target_node_add_lun", 00:04:18.489 "iscsi_get_stats", 00:04:18.489 "iscsi_get_connections", 00:04:18.489 "iscsi_portal_group_set_auth", 00:04:18.489 "iscsi_start_portal_group", 00:04:18.489 "iscsi_delete_portal_group", 00:04:18.489 "iscsi_create_portal_group", 00:04:18.489 "iscsi_get_portal_groups", 00:04:18.489 "iscsi_delete_target_node", 00:04:18.489 "iscsi_target_node_remove_pg_ig_maps", 00:04:18.489 "iscsi_target_node_add_pg_ig_maps", 00:04:18.489 "iscsi_create_target_node", 00:04:18.489 "iscsi_get_target_nodes", 00:04:18.489 "iscsi_delete_initiator_group", 00:04:18.489 "iscsi_initiator_group_remove_initiators", 00:04:18.489 "iscsi_initiator_group_add_initiators", 00:04:18.489 "iscsi_create_initiator_group", 00:04:18.489 "iscsi_get_initiator_groups", 00:04:18.489 "nvmf_set_crdt", 00:04:18.489 "nvmf_set_config", 00:04:18.489 "nvmf_set_max_subsystems", 00:04:18.489 "nvmf_stop_mdns_prr", 00:04:18.489 "nvmf_publish_mdns_prr", 00:04:18.489 "nvmf_subsystem_get_listeners", 00:04:18.489 "nvmf_subsystem_get_qpairs", 00:04:18.489 
"nvmf_subsystem_get_controllers", 00:04:18.489 "nvmf_get_stats", 00:04:18.489 "nvmf_get_transports", 00:04:18.489 "nvmf_create_transport", 00:04:18.489 "nvmf_get_targets", 00:04:18.489 "nvmf_delete_target", 00:04:18.489 "nvmf_create_target", 00:04:18.489 "nvmf_subsystem_allow_any_host", 00:04:18.489 "nvmf_subsystem_set_keys", 00:04:18.489 "nvmf_subsystem_remove_host", 00:04:18.489 "nvmf_subsystem_add_host", 00:04:18.489 "nvmf_ns_remove_host", 00:04:18.489 "nvmf_ns_add_host", 00:04:18.489 "nvmf_subsystem_remove_ns", 00:04:18.489 "nvmf_subsystem_set_ns_ana_group", 00:04:18.489 "nvmf_subsystem_add_ns", 00:04:18.489 "nvmf_subsystem_listener_set_ana_state", 00:04:18.489 "nvmf_discovery_get_referrals", 00:04:18.489 "nvmf_discovery_remove_referral", 00:04:18.489 "nvmf_discovery_add_referral", 00:04:18.489 "nvmf_subsystem_remove_listener", 00:04:18.489 "nvmf_subsystem_add_listener", 00:04:18.489 "nvmf_delete_subsystem", 00:04:18.489 "nvmf_create_subsystem", 00:04:18.489 "nvmf_get_subsystems", 00:04:18.489 "env_dpdk_get_mem_stats", 00:04:18.489 "nbd_get_disks", 00:04:18.489 "nbd_stop_disk", 00:04:18.489 "nbd_start_disk", 00:04:18.489 "ublk_recover_disk", 00:04:18.489 "ublk_get_disks", 00:04:18.489 "ublk_stop_disk", 00:04:18.489 "ublk_start_disk", 00:04:18.489 "ublk_destroy_target", 00:04:18.489 "ublk_create_target", 00:04:18.489 "virtio_blk_create_transport", 00:04:18.489 "virtio_blk_get_transports", 00:04:18.489 "vhost_controller_set_coalescing", 00:04:18.489 "vhost_get_controllers", 00:04:18.489 "vhost_delete_controller", 00:04:18.489 "vhost_create_blk_controller", 00:04:18.489 "vhost_scsi_controller_remove_target", 00:04:18.489 "vhost_scsi_controller_add_target", 00:04:18.489 "vhost_start_scsi_controller", 00:04:18.489 "vhost_create_scsi_controller", 00:04:18.489 "thread_set_cpumask", 00:04:18.489 "scheduler_set_options", 00:04:18.489 "framework_get_governor", 00:04:18.489 "framework_get_scheduler", 00:04:18.489 "framework_set_scheduler", 00:04:18.489 "framework_get_reactors", 00:04:18.489 "thread_get_io_channels", 00:04:18.489 "thread_get_pollers", 00:04:18.489 "thread_get_stats", 00:04:18.489 "framework_monitor_context_switch", 00:04:18.489 "spdk_kill_instance", 00:04:18.489 "log_enable_timestamps", 00:04:18.489 "log_get_flags", 00:04:18.489 "log_clear_flag", 00:04:18.489 "log_set_flag", 00:04:18.489 "log_get_level", 00:04:18.489 "log_set_level", 00:04:18.489 "log_get_print_level", 00:04:18.489 "log_set_print_level", 00:04:18.489 "framework_enable_cpumask_locks", 00:04:18.489 "framework_disable_cpumask_locks", 00:04:18.489 "framework_wait_init", 00:04:18.489 "framework_start_init", 00:04:18.489 "scsi_get_devices", 00:04:18.489 "bdev_get_histogram", 00:04:18.489 "bdev_enable_histogram", 00:04:18.489 "bdev_set_qos_limit", 00:04:18.489 "bdev_set_qd_sampling_period", 00:04:18.489 "bdev_get_bdevs", 00:04:18.489 "bdev_reset_iostat", 00:04:18.489 "bdev_get_iostat", 00:04:18.489 "bdev_examine", 00:04:18.489 "bdev_wait_for_examine", 00:04:18.489 "bdev_set_options", 00:04:18.489 "accel_get_stats", 00:04:18.489 "accel_set_options", 00:04:18.489 "accel_set_driver", 00:04:18.489 "accel_crypto_key_destroy", 00:04:18.489 "accel_crypto_keys_get", 00:04:18.489 "accel_crypto_key_create", 00:04:18.489 "accel_assign_opc", 00:04:18.489 "accel_get_module_info", 00:04:18.489 "accel_get_opc_assignments", 00:04:18.489 "vmd_rescan", 00:04:18.489 "vmd_remove_device", 00:04:18.489 "vmd_enable", 00:04:18.489 "sock_get_default_impl", 00:04:18.489 "sock_set_default_impl", 00:04:18.489 "sock_impl_set_options", 00:04:18.489 
"sock_impl_get_options", 00:04:18.489 "iobuf_get_stats", 00:04:18.489 "iobuf_set_options", 00:04:18.489 "keyring_get_keys", 00:04:18.489 "framework_get_pci_devices", 00:04:18.489 "framework_get_config", 00:04:18.489 "framework_get_subsystems", 00:04:18.489 "fsdev_set_opts", 00:04:18.489 "fsdev_get_opts", 00:04:18.489 "trace_get_info", 00:04:18.489 "trace_get_tpoint_group_mask", 00:04:18.489 "trace_disable_tpoint_group", 00:04:18.489 "trace_enable_tpoint_group", 00:04:18.489 "trace_clear_tpoint_mask", 00:04:18.489 "trace_set_tpoint_mask", 00:04:18.489 "notify_get_notifications", 00:04:18.489 "notify_get_types", 00:04:18.489 "spdk_get_version", 00:04:18.489 "rpc_get_methods" 00:04:18.489 ] 00:04:18.489 20:27:32 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:18.489 20:27:32 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:18.489 20:27:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:18.489 20:27:32 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:18.489 20:27:32 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57319 00:04:18.489 20:27:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57319 ']' 00:04:18.489 20:27:32 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57319 00:04:18.489 20:27:32 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:18.489 20:27:32 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:18.489 20:27:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57319 00:04:18.489 20:27:33 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:18.489 20:27:33 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:18.489 killing process with pid 57319 00:04:18.489 20:27:33 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57319' 00:04:18.489 20:27:33 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57319 00:04:18.489 20:27:33 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57319 00:04:18.752 ************************************ 00:04:18.752 END TEST spdkcli_tcp 00:04:18.752 ************************************ 00:04:18.752 00:04:18.752 real 0m1.591s 00:04:18.752 user 0m2.962s 00:04:18.752 sys 0m0.340s 00:04:18.752 20:27:33 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.752 20:27:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:18.752 20:27:33 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:18.752 20:27:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.752 20:27:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.752 20:27:33 -- common/autotest_common.sh@10 -- # set +x 00:04:18.752 ************************************ 00:04:18.752 START TEST dpdk_mem_utility 00:04:18.752 ************************************ 00:04:18.752 20:27:33 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:19.011 * Looking for test storage... 
00:04:19.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:19.011 20:27:33 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:19.011 20:27:33 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:19.011 20:27:33 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:19.011 20:27:33 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:19.011 20:27:33 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.011 20:27:33 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.011 20:27:33 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.011 20:27:33 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.011 20:27:33 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.011 20:27:33 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.011 20:27:33 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.011 20:27:33 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.011 20:27:33 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.011 20:27:33 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.011 20:27:33 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.011 20:27:33 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:19.011 20:27:33 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:19.011 20:27:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.011 20:27:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:19.011 20:27:33 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:19.011 20:27:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:19.011 20:27:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.011 20:27:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:19.011 20:27:33 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.011 20:27:33 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:19.011 20:27:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:19.012 20:27:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.012 20:27:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:19.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:19.012 20:27:33 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.012 20:27:33 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.012 20:27:33 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.012 20:27:33 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:19.012 20:27:33 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.012 20:27:33 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:19.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.012 --rc genhtml_branch_coverage=1 00:04:19.012 --rc genhtml_function_coverage=1 00:04:19.012 --rc genhtml_legend=1 00:04:19.012 --rc geninfo_all_blocks=1 00:04:19.012 --rc geninfo_unexecuted_blocks=1 00:04:19.012 00:04:19.012 ' 00:04:19.012 20:27:33 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:19.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.012 --rc genhtml_branch_coverage=1 00:04:19.012 --rc genhtml_function_coverage=1 00:04:19.012 --rc genhtml_legend=1 00:04:19.012 --rc geninfo_all_blocks=1 00:04:19.012 --rc geninfo_unexecuted_blocks=1 00:04:19.012 00:04:19.012 ' 00:04:19.012 20:27:33 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:19.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.012 --rc genhtml_branch_coverage=1 00:04:19.012 --rc genhtml_function_coverage=1 00:04:19.012 --rc genhtml_legend=1 00:04:19.012 --rc geninfo_all_blocks=1 00:04:19.012 --rc geninfo_unexecuted_blocks=1 00:04:19.012 00:04:19.012 ' 00:04:19.012 20:27:33 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:19.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.012 --rc genhtml_branch_coverage=1 00:04:19.012 --rc genhtml_function_coverage=1 00:04:19.012 --rc genhtml_legend=1 00:04:19.012 --rc geninfo_all_blocks=1 00:04:19.012 --rc geninfo_unexecuted_blocks=1 00:04:19.012 00:04:19.012 ' 00:04:19.012 20:27:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:19.012 20:27:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57417 00:04:19.012 20:27:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57417 00:04:19.012 20:27:33 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57417 ']' 00:04:19.012 20:27:33 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.012 20:27:33 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:19.012 20:27:33 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.012 20:27:33 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:19.012 20:27:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:19.012 20:27:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:19.012 [2024-11-26 20:27:33.466649] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:04:19.012 [2024-11-26 20:27:33.466713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57417 ] 00:04:19.273 [2024-11-26 20:27:33.604215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.273 [2024-11-26 20:27:33.641862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.273 [2024-11-26 20:27:33.689681] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:19.847 20:27:34 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:19.847 20:27:34 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:19.847 20:27:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:19.847 20:27:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:19.847 20:27:34 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.847 20:27:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:19.847 { 00:04:19.847 "filename": "/tmp/spdk_mem_dump.txt" 00:04:19.847 } 00:04:19.847 20:27:34 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.847 20:27:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:20.110 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:20.110 1 heaps totaling size 818.000000 MiB 00:04:20.110 size: 818.000000 MiB heap id: 0 00:04:20.110 end heaps---------- 00:04:20.110 9 mempools totaling size 603.782043 MiB 00:04:20.110 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:20.110 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:20.110 size: 100.555481 MiB name: bdev_io_57417 00:04:20.110 size: 50.003479 MiB name: msgpool_57417 00:04:20.110 size: 36.509338 MiB name: fsdev_io_57417 00:04:20.110 size: 21.763794 MiB name: PDU_Pool 00:04:20.110 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:20.110 size: 4.133484 MiB name: evtpool_57417 00:04:20.110 size: 0.026123 MiB name: Session_Pool 00:04:20.110 end mempools------- 00:04:20.110 6 memzones totaling size 4.142822 MiB 00:04:20.110 size: 1.000366 MiB name: RG_ring_0_57417 00:04:20.110 size: 1.000366 MiB name: RG_ring_1_57417 00:04:20.110 size: 1.000366 MiB name: RG_ring_4_57417 00:04:20.110 size: 1.000366 MiB name: RG_ring_5_57417 00:04:20.110 size: 0.125366 MiB name: RG_ring_2_57417 00:04:20.110 size: 0.015991 MiB name: RG_ring_3_57417 00:04:20.110 end memzones------- 00:04:20.110 20:27:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:20.110 heap id: 0 total size: 818.000000 MiB number of busy elements: 317 number of free elements: 15 00:04:20.110 list of free elements. 
size: 10.802490 MiB 00:04:20.110 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:20.110 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:20.110 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:20.110 element at address: 0x200000400000 with size: 0.993958 MiB 00:04:20.110 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:20.110 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:20.110 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:20.110 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:20.110 element at address: 0x20001ae00000 with size: 0.567688 MiB 00:04:20.110 element at address: 0x20000a600000 with size: 0.488892 MiB 00:04:20.110 element at address: 0x200000c00000 with size: 0.486267 MiB 00:04:20.110 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:20.110 element at address: 0x200003e00000 with size: 0.480286 MiB 00:04:20.110 element at address: 0x200028200000 with size: 0.395752 MiB 00:04:20.110 element at address: 0x200000800000 with size: 0.351746 MiB 00:04:20.110 list of standard malloc elements. size: 199.268616 MiB 00:04:20.110 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:20.110 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:20.110 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:20.110 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:20.110 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:20.110 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:20.110 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:20.110 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:20.110 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:20.110 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:04:20.110 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:20.110 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:20.110 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:04:20.110 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:04:20.110 element at address: 0x20000085e580 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000087e840 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000087e900 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000087f080 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000087f140 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000087f200 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000087f380 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000087f440 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000087f500 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:20.111 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:20.111 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:04:20.111 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:20.111 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:20.111 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:20.111 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:20.111 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae92c80 with size: 0.000183 MiB 
00:04:20.111 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:04:20.111 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:04:20.112 element at 
address: 0x20001ae95200 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:20.112 element at address: 0x200028265500 with size: 0.000183 MiB 00:04:20.112 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826c480 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826c540 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826c600 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826c780 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826c840 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826c900 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826d080 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826d140 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826d200 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826d380 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826d440 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826d500 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826d680 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826d740 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826d800 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826d980 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826da40 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826db00 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826de00 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826df80 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826e040 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826e100 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826e280 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826e340 
with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826e400 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826e580 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826e640 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826e700 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826e880 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826e940 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826f000 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826f180 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826f240 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826f300 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826f480 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826f540 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826f600 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826f780 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826f840 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826f900 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:20.112 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:20.112 list of memzone associated elements. 
size: 607.928894 MiB 00:04:20.112 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:20.112 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:20.112 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:20.112 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:20.112 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:20.112 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57417_0 00:04:20.112 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:20.112 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57417_0 00:04:20.112 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:20.112 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57417_0 00:04:20.112 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:20.112 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:20.112 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:20.112 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:20.112 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:20.112 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57417_0 00:04:20.112 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:20.112 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57417 00:04:20.112 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:20.112 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57417 00:04:20.112 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:20.112 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:20.112 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:20.112 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:20.112 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:20.112 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:20.113 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:20.113 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:20.113 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:20.113 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57417 00:04:20.113 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:20.113 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57417 00:04:20.113 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:20.113 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57417 00:04:20.113 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:04:20.113 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57417 00:04:20.113 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:20.113 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57417 00:04:20.113 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:20.113 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57417 00:04:20.113 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:20.113 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:20.113 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:20.113 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:20.113 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:20.113 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:20.113 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:20.113 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57417 00:04:20.113 element at address: 0x20000085e640 with size: 0.125488 MiB 00:04:20.113 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57417 00:04:20.113 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:20.113 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:20.113 element at address: 0x200028265680 with size: 0.023743 MiB 00:04:20.113 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:20.113 element at address: 0x20000085a380 with size: 0.016113 MiB 00:04:20.113 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57417 00:04:20.113 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:04:20.113 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:20.113 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:04:20.113 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57417 00:04:20.113 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:20.113 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57417 00:04:20.113 element at address: 0x20000085a180 with size: 0.000305 MiB 00:04:20.113 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57417 00:04:20.113 element at address: 0x20002826c280 with size: 0.000305 MiB 00:04:20.113 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:20.113 20:27:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:20.113 20:27:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57417 00:04:20.113 20:27:34 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57417 ']' 00:04:20.113 20:27:34 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57417 00:04:20.113 20:27:34 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:20.113 20:27:34 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:20.113 20:27:34 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57417 00:04:20.113 killing process with pid 57417 00:04:20.113 20:27:34 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:20.113 20:27:34 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:20.113 20:27:34 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57417' 00:04:20.113 20:27:34 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57417 00:04:20.113 20:27:34 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57417 00:04:20.374 00:04:20.374 real 0m1.421s 00:04:20.374 user 0m1.563s 00:04:20.374 sys 0m0.284s 00:04:20.374 20:27:34 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.374 20:27:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:20.374 ************************************ 00:04:20.374 END TEST dpdk_mem_utility 00:04:20.374 ************************************ 00:04:20.374 20:27:34 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:20.374 20:27:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.374 20:27:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.374 20:27:34 -- common/autotest_common.sh@10 -- # set +x 
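The heap, mempool and memzone listing above is the output of scripts/dpdk_mem_info.py, which parses the snapshot that the env_dpdk_get_mem_stats RPC writes to /tmp/spdk_mem_dump.txt (both the script path and the dump filename appear in the trace). A minimal manual reproduction, assuming a running spdk_tgt and the stock scripts/rpc.py client on the default /var/tmp/spdk.sock socket, would look like:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats    # ask the target to dump its memory stats to /tmp/spdk_mem_dump.txt
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                 # summarize heaps, mempools and memzones from the dump
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0            # per-element detail for heap 0, as printed above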
00:04:20.374 ************************************ 00:04:20.374 START TEST event 00:04:20.374 ************************************ 00:04:20.374 20:27:34 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:20.374 * Looking for test storage... 00:04:20.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:20.374 20:27:34 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:20.374 20:27:34 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:20.374 20:27:34 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:20.374 20:27:34 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:20.374 20:27:34 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.374 20:27:34 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.374 20:27:34 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.374 20:27:34 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.374 20:27:34 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.374 20:27:34 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.374 20:27:34 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.374 20:27:34 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.374 20:27:34 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.374 20:27:34 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.374 20:27:34 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.374 20:27:34 event -- scripts/common.sh@344 -- # case "$op" in 00:04:20.374 20:27:34 event -- scripts/common.sh@345 -- # : 1 00:04:20.374 20:27:34 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.374 20:27:34 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:20.374 20:27:34 event -- scripts/common.sh@365 -- # decimal 1 00:04:20.374 20:27:34 event -- scripts/common.sh@353 -- # local d=1 00:04:20.374 20:27:34 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.374 20:27:34 event -- scripts/common.sh@355 -- # echo 1 00:04:20.374 20:27:34 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.374 20:27:34 event -- scripts/common.sh@366 -- # decimal 2 00:04:20.374 20:27:34 event -- scripts/common.sh@353 -- # local d=2 00:04:20.374 20:27:34 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.374 20:27:34 event -- scripts/common.sh@355 -- # echo 2 00:04:20.374 20:27:34 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.374 20:27:34 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.374 20:27:34 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.374 20:27:34 event -- scripts/common.sh@368 -- # return 0 00:04:20.374 20:27:34 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.374 20:27:34 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:20.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.374 --rc genhtml_branch_coverage=1 00:04:20.374 --rc genhtml_function_coverage=1 00:04:20.374 --rc genhtml_legend=1 00:04:20.374 --rc geninfo_all_blocks=1 00:04:20.374 --rc geninfo_unexecuted_blocks=1 00:04:20.374 00:04:20.374 ' 00:04:20.374 20:27:34 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:20.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.374 --rc genhtml_branch_coverage=1 00:04:20.374 --rc genhtml_function_coverage=1 00:04:20.374 --rc genhtml_legend=1 00:04:20.374 --rc 
geninfo_all_blocks=1 00:04:20.374 --rc geninfo_unexecuted_blocks=1 00:04:20.374 00:04:20.374 ' 00:04:20.374 20:27:34 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:20.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.374 --rc genhtml_branch_coverage=1 00:04:20.375 --rc genhtml_function_coverage=1 00:04:20.375 --rc genhtml_legend=1 00:04:20.375 --rc geninfo_all_blocks=1 00:04:20.375 --rc geninfo_unexecuted_blocks=1 00:04:20.375 00:04:20.375 ' 00:04:20.375 20:27:34 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:20.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.375 --rc genhtml_branch_coverage=1 00:04:20.375 --rc genhtml_function_coverage=1 00:04:20.375 --rc genhtml_legend=1 00:04:20.375 --rc geninfo_all_blocks=1 00:04:20.375 --rc geninfo_unexecuted_blocks=1 00:04:20.375 00:04:20.375 ' 00:04:20.375 20:27:34 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:20.375 20:27:34 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:20.375 20:27:34 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:20.375 20:27:34 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:20.375 20:27:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.375 20:27:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:20.634 ************************************ 00:04:20.634 START TEST event_perf 00:04:20.634 ************************************ 00:04:20.634 20:27:34 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:20.634 Running I/O for 1 seconds...[2024-11-26 20:27:34.949005] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:04:20.634 [2024-11-26 20:27:34.949176] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57497 ] 00:04:20.634 [2024-11-26 20:27:35.091273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:20.634 [2024-11-26 20:27:35.134887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:20.634 [2024-11-26 20:27:35.135468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:20.634 [2024-11-26 20:27:35.136073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:20.634 [2024-11-26 20:27:35.136204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.014 Running I/O for 1 seconds... 00:04:22.014 lcore 0: 172986 00:04:22.014 lcore 1: 172986 00:04:22.014 lcore 2: 172989 00:04:22.014 lcore 3: 172986 00:04:22.014 done. 
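The per-lcore counters above (roughly 173k events on each of the four reactors) are printed by the event_perf benchmark, which the harness invoked as test/event/event_perf/event_perf -m 0xF -t 1, i.e. a four-core mask for a one-second run. A standalone run with a narrower mask, for example, would be:
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0x3 -t 1    # two reactors, one second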
00:04:22.014 00:04:22.014 real 0m1.239s 00:04:22.014 user 0m4.074s 00:04:22.014 sys 0m0.041s 00:04:22.014 20:27:36 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.014 ************************************ 00:04:22.014 20:27:36 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:22.014 END TEST event_perf 00:04:22.014 ************************************ 00:04:22.014 20:27:36 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:22.014 20:27:36 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:22.014 20:27:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.014 20:27:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:22.014 ************************************ 00:04:22.014 START TEST event_reactor 00:04:22.014 ************************************ 00:04:22.014 20:27:36 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:22.014 [2024-11-26 20:27:36.258822] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:04:22.014 [2024-11-26 20:27:36.259113] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57530 ] 00:04:22.014 [2024-11-26 20:27:36.397859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.014 [2024-11-26 20:27:36.437530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.948 test_start 00:04:22.948 oneshot 00:04:22.948 tick 100 00:04:22.948 tick 100 00:04:22.948 tick 250 00:04:22.948 tick 100 00:04:22.948 tick 100 00:04:22.948 tick 250 00:04:22.948 tick 100 00:04:22.948 tick 500 00:04:22.948 tick 100 00:04:22.948 tick 100 00:04:22.948 tick 250 00:04:22.948 tick 100 00:04:22.948 tick 100 00:04:22.948 test_end 00:04:22.948 00:04:22.948 real 0m1.228s 00:04:22.948 user 0m1.089s 00:04:22.948 sys 0m0.031s 00:04:22.948 ************************************ 00:04:22.948 END TEST event_reactor 00:04:22.948 ************************************ 00:04:22.948 20:27:37 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.948 20:27:37 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:23.206 20:27:37 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:23.206 20:27:37 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:23.206 20:27:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.206 20:27:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:23.206 ************************************ 00:04:23.206 START TEST event_reactor_perf 00:04:23.206 ************************************ 00:04:23.206 20:27:37 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:23.206 [2024-11-26 20:27:37.554338] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:04:23.206 [2024-11-26 20:27:37.554407] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57565 ] 00:04:23.206 [2024-11-26 20:27:37.693861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.206 [2024-11-26 20:27:37.731223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.587 test_start 00:04:24.587 test_end 00:04:24.587 Performance: 386842 events per second 00:04:24.587 ************************************ 00:04:24.587 END TEST event_reactor_perf 00:04:24.587 ************************************ 00:04:24.587 00:04:24.587 real 0m1.225s 00:04:24.587 user 0m1.087s 00:04:24.587 sys 0m0.031s 00:04:24.587 20:27:38 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.587 20:27:38 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:24.587 20:27:38 event -- event/event.sh@49 -- # uname -s 00:04:24.587 20:27:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:24.587 20:27:38 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:24.587 20:27:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.587 20:27:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.587 20:27:38 event -- common/autotest_common.sh@10 -- # set +x 00:04:24.587 ************************************ 00:04:24.587 START TEST event_scheduler 00:04:24.587 ************************************ 00:04:24.587 20:27:38 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:24.587 * Looking for test storage... 
00:04:24.587 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:24.587 20:27:38 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:24.587 20:27:38 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:24.587 20:27:38 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:24.587 20:27:38 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.587 20:27:38 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:24.587 20:27:38 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.587 20:27:38 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:24.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.587 --rc genhtml_branch_coverage=1 00:04:24.587 --rc genhtml_function_coverage=1 00:04:24.587 --rc genhtml_legend=1 00:04:24.587 --rc geninfo_all_blocks=1 00:04:24.587 --rc geninfo_unexecuted_blocks=1 00:04:24.587 00:04:24.587 ' 00:04:24.587 20:27:38 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:24.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.587 --rc genhtml_branch_coverage=1 00:04:24.587 --rc genhtml_function_coverage=1 00:04:24.587 --rc genhtml_legend=1 00:04:24.587 --rc geninfo_all_blocks=1 00:04:24.587 --rc geninfo_unexecuted_blocks=1 00:04:24.587 00:04:24.587 ' 00:04:24.587 20:27:38 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:24.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.587 --rc genhtml_branch_coverage=1 00:04:24.587 --rc genhtml_function_coverage=1 00:04:24.587 --rc genhtml_legend=1 00:04:24.587 --rc geninfo_all_blocks=1 00:04:24.587 --rc geninfo_unexecuted_blocks=1 00:04:24.587 00:04:24.587 ' 00:04:24.587 20:27:38 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:24.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.587 --rc genhtml_branch_coverage=1 00:04:24.587 --rc genhtml_function_coverage=1 00:04:24.587 --rc genhtml_legend=1 00:04:24.587 --rc geninfo_all_blocks=1 00:04:24.587 --rc geninfo_unexecuted_blocks=1 00:04:24.587 00:04:24.587 ' 00:04:24.587 20:27:38 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:24.587 20:27:38 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=57635 00:04:24.587 20:27:38 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.587 20:27:38 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:24.587 20:27:38 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 57635 00:04:24.587 20:27:38 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 57635 ']' 00:04:24.587 20:27:38 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.587 20:27:38 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.587 20:27:38 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.587 20:27:38 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.587 20:27:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:24.587 [2024-11-26 20:27:39.034065] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:04:24.588 [2024-11-26 20:27:39.034273] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57635 ] 00:04:24.903 [2024-11-26 20:27:39.175631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:24.903 [2024-11-26 20:27:39.217526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.903 [2024-11-26 20:27:39.217822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.903 [2024-11-26 20:27:39.217861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:24.903 [2024-11-26 20:27:39.217980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:25.468 20:27:39 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.468 20:27:39 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:25.468 20:27:39 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:25.468 20:27:39 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.468 20:27:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:25.468 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:25.468 POWER: Cannot set governor of lcore 0 to userspace 00:04:25.468 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:25.468 POWER: Cannot set governor of lcore 0 to performance 00:04:25.468 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:25.468 POWER: Cannot set governor of lcore 0 to userspace 00:04:25.468 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:25.468 POWER: Cannot set governor of lcore 0 to userspace 00:04:25.468 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:25.468 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:25.468 POWER: Unable to set Power Management Environment for lcore 0 00:04:25.468 [2024-11-26 20:27:39.931442] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:25.468 [2024-11-26 20:27:39.931451] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:25.468 [2024-11-26 20:27:39.931457] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:25.468 [2024-11-26 20:27:39.931465] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:25.468 [2024-11-26 20:27:39.931469] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:25.468 [2024-11-26 20:27:39.931473] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:25.468 20:27:39 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.469 20:27:39 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:25.469 20:27:39 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.469 20:27:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:25.469 [2024-11-26 20:27:39.970039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:25.469 [2024-11-26 20:27:39.994646] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:25.469 20:27:39 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.469 20:27:39 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:25.469 20:27:39 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.469 20:27:39 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.469 20:27:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:25.469 ************************************ 00:04:25.469 START TEST scheduler_create_thread 00:04:25.469 ************************************ 00:04:25.469 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:25.469 20:27:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:25.469 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.469 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.726 2 00:04:25.726 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.726 20:27:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:25.726 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.727 3 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.727 4 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.727 5 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.727 6 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.727 7 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.727 8 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.727 9 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.727 10 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.727 20:27:40 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.727 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:26.293 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.293 20:27:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:26.293 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.293 20:27:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.666 20:27:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.666 20:27:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:27.666 20:27:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:27.666 20:27:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.666 20:27:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:28.599 ************************************ 00:04:28.599 END TEST scheduler_create_thread 00:04:28.599 ************************************ 00:04:28.599 20:27:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.599 00:04:28.599 real 0m3.090s 00:04:28.599 user 0m0.018s 00:04:28.599 sys 0m0.003s 00:04:28.599 20:27:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.599 20:27:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:28.599 20:27:43 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:28.599 20:27:43 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 57635 00:04:28.599 20:27:43 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 57635 ']' 00:04:28.599 20:27:43 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 57635 00:04:28.599 20:27:43 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:28.599 20:27:43 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.599 20:27:43 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57635 00:04:28.856 killing process with pid 57635 00:04:28.856 20:27:43 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:28.856 20:27:43 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:28.856 20:27:43 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
57635' 00:04:28.856 20:27:43 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 57635 00:04:28.856 20:27:43 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 57635 00:04:29.114 [2024-11-26 20:27:43.481594] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:29.114 ************************************ 00:04:29.114 END TEST event_scheduler 00:04:29.114 ************************************ 00:04:29.114 00:04:29.114 real 0m4.784s 00:04:29.114 user 0m9.237s 00:04:29.114 sys 0m0.268s 00:04:29.114 20:27:43 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.114 20:27:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:29.114 20:27:43 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:29.372 20:27:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:29.372 20:27:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.372 20:27:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.372 20:27:43 event -- common/autotest_common.sh@10 -- # set +x 00:04:29.372 ************************************ 00:04:29.372 START TEST app_repeat 00:04:29.372 ************************************ 00:04:29.372 20:27:43 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:29.372 20:27:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.372 20:27:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.372 20:27:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:29.372 20:27:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.372 20:27:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:29.372 20:27:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:29.372 20:27:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:29.372 20:27:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=57729 00:04:29.372 20:27:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.372 Process app_repeat pid: 57729 00:04:29.372 20:27:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57729' 00:04:29.372 20:27:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:29.372 spdk_app_start Round 0 00:04:29.372 20:27:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:29.372 20:27:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57729 /var/tmp/spdk-nbd.sock 00:04:29.372 20:27:43 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:29.372 20:27:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57729 ']' 00:04:29.372 20:27:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:29.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:29.372 20:27:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.372 20:27:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
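For anyone replaying the event_scheduler run above by hand: the test drives an already-running SPDK target purely over JSON-RPC. A minimal sketch of the same sequence, assuming scripts/rpc.py from the SPDK repo, a target listening on /var/tmp/spdk.sock, and the test's scheduler_plugin (from test/event/scheduler) importable on PYTHONPATH:

    rpc() { scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    # switch the framework to the dynamic scheduler, then finish subsystem init
    rpc framework_set_scheduler dynamic
    rpc framework_start_init
    # create a pinned busy thread and a pinned idle thread via the test plugin
    # (-m is the cpumask, -a the target active percentage)
    rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # thread ids returned by the create calls can then be retuned or removed
    rpc --plugin scheduler_plugin scheduler_thread_set_active 11 50
    rpc --plugin scheduler_plugin scheduler_thread_delete 12

The ids 11 and 12 are the ones printed in this particular run; a fresh run would use whatever its own create calls return.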
00:04:29.372 20:27:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.372 20:27:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:29.372 [2024-11-26 20:27:43.704432] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:04:29.372 [2024-11-26 20:27:43.704490] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57729 ] 00:04:29.372 [2024-11-26 20:27:43.839806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:29.372 [2024-11-26 20:27:43.877264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.372 [2024-11-26 20:27:43.877364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.372 [2024-11-26 20:27:43.908825] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:29.629 20:27:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.629 20:27:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:29.629 20:27:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:29.629 Malloc0 00:04:29.629 20:27:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:29.887 Malloc1 00:04:29.887 20:27:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:29.887 20:27:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.887 20:27:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.887 20:27:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:29.887 20:27:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.887 20:27:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:29.887 20:27:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:29.887 20:27:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.887 20:27:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.887 20:27:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:29.887 20:27:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.887 20:27:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:29.887 20:27:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:29.887 20:27:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:29.887 20:27:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.887 20:27:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:30.145 /dev/nbd0 00:04:30.145 20:27:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:30.145 20:27:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:30.145 20:27:44 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:04:30.145 20:27:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:30.145 20:27:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:30.145 20:27:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:30.145 20:27:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:30.145 20:27:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:30.145 20:27:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:30.145 20:27:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:30.145 20:27:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:30.145 1+0 records in 00:04:30.145 1+0 records out 00:04:30.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184681 s, 22.2 MB/s 00:04:30.145 20:27:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:30.145 20:27:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:30.145 20:27:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:30.145 20:27:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:30.145 20:27:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:30.145 20:27:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:30.145 20:27:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.145 20:27:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:30.403 /dev/nbd1 00:04:30.403 20:27:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:30.403 20:27:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:30.403 20:27:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:30.403 20:27:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:30.403 20:27:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:30.403 20:27:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:30.403 20:27:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:30.403 20:27:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:30.403 20:27:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:30.403 20:27:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:30.403 20:27:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:30.403 1+0 records in 00:04:30.403 1+0 records out 00:04:30.403 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226955 s, 18.0 MB/s 00:04:30.403 20:27:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:30.403 20:27:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:30.403 20:27:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:30.403 20:27:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:30.403 20:27:44 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:04:30.403 20:27:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:30.403 20:27:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.403 20:27:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:30.403 20:27:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.403 20:27:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:30.664 { 00:04:30.664 "nbd_device": "/dev/nbd0", 00:04:30.664 "bdev_name": "Malloc0" 00:04:30.664 }, 00:04:30.664 { 00:04:30.664 "nbd_device": "/dev/nbd1", 00:04:30.664 "bdev_name": "Malloc1" 00:04:30.664 } 00:04:30.664 ]' 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:30.664 { 00:04:30.664 "nbd_device": "/dev/nbd0", 00:04:30.664 "bdev_name": "Malloc0" 00:04:30.664 }, 00:04:30.664 { 00:04:30.664 "nbd_device": "/dev/nbd1", 00:04:30.664 "bdev_name": "Malloc1" 00:04:30.664 } 00:04:30.664 ]' 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:30.664 /dev/nbd1' 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:30.664 /dev/nbd1' 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:30.664 256+0 records in 00:04:30.664 256+0 records out 00:04:30.664 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00799908 s, 131 MB/s 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:30.664 256+0 records in 00:04:30.664 256+0 records out 00:04:30.664 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0174275 s, 60.2 MB/s 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:30.664 256+0 records in 00:04:30.664 
256+0 records out 00:04:30.664 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020992 s, 50.0 MB/s 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.664 20:27:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:30.924 20:27:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:30.924 20:27:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:30.924 20:27:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:30.924 20:27:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:30.924 20:27:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:30.924 20:27:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:30.924 20:27:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:30.924 20:27:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:30.924 20:27:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.924 20:27:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:31.183 20:27:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:31.183 20:27:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:31.183 20:27:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:31.183 20:27:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:31.183 20:27:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:04:31.183 20:27:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:31.183 20:27:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:31.183 20:27:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:31.183 20:27:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:31.183 20:27:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.183 20:27:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:31.441 20:27:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:31.441 20:27:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:31.441 20:27:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.441 20:27:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:31.441 20:27:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.441 20:27:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:31.441 20:27:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:31.441 20:27:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:31.441 20:27:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:31.441 20:27:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:31.441 20:27:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:31.441 20:27:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:31.441 20:27:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:31.699 20:27:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:31.699 [2024-11-26 20:27:46.162135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:31.699 [2024-11-26 20:27:46.199411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.699 [2024-11-26 20:27:46.199529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.699 [2024-11-26 20:27:46.231259] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:31.699 [2024-11-26 20:27:46.231317] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:31.699 [2024-11-26 20:27:46.231324] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:34.995 spdk_app_start Round 1 00:04:34.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:34.995 20:27:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:34.995 20:27:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:34.995 20:27:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57729 /var/tmp/spdk-nbd.sock 00:04:34.995 20:27:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57729 ']' 00:04:34.995 20:27:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:34.995 20:27:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.995 20:27:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
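Each app_repeat round in the trace follows the same create/attach/verify/detach cycle against the NBD socket. Condensed into plain shell (a sketch based only on the commands visible in the log; /tmp/nbdrandtest stands in for the test's scratch file under test/event/):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    # two 64 MB malloc bdevs with a 4 KiB block size -> Malloc0, Malloc1
    rpc bdev_malloc_create 64 4096
    rpc bdev_malloc_create 64 4096
    # expose them as kernel NBD block devices
    rpc nbd_start_disk Malloc0 /dev/nbd0
    rpc nbd_start_disk Malloc1 /dev/nbd1
    # push 1 MiB of random data through each device and compare it against the source
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for d in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of="$d" bs=4096 count=256 oflag=direct
        cmp -b -n 1M /tmp/nbdrandtest "$d"
    done
    # detach the devices and stop the app so the next round starts clean
    rpc nbd_stop_disk /dev/nbd0
    rpc nbd_stop_disk /dev/nbd1
    rpc spdk_kill_instance SIGTERM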
00:04:34.995 20:27:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.995 20:27:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:34.995 20:27:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.995 20:27:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:34.995 20:27:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:34.995 Malloc0 00:04:34.995 20:27:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:35.270 Malloc1 00:04:35.270 20:27:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:35.270 20:27:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.270 20:27:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:35.270 20:27:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:35.270 20:27:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.270 20:27:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:35.270 20:27:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:35.270 20:27:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.270 20:27:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:35.270 20:27:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:35.270 20:27:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.270 20:27:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:35.270 20:27:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:35.270 20:27:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:35.270 20:27:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:35.270 20:27:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:35.531 /dev/nbd0 00:04:35.531 20:27:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:35.531 20:27:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:35.531 20:27:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:35.531 20:27:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:35.531 20:27:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:35.531 20:27:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:35.531 20:27:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:35.531 20:27:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:35.531 20:27:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:35.531 20:27:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:35.531 20:27:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:35.531 1+0 records in 00:04:35.531 1+0 records out 
00:04:35.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223013 s, 18.4 MB/s 00:04:35.531 20:27:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:35.531 20:27:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:35.531 20:27:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:35.531 20:27:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:35.531 20:27:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:35.531 20:27:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:35.531 20:27:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:35.531 20:27:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:35.792 /dev/nbd1 00:04:35.792 20:27:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:35.792 20:27:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:35.792 20:27:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:35.792 20:27:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:35.792 20:27:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:35.792 20:27:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:35.792 20:27:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:35.792 20:27:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:35.792 20:27:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:35.792 20:27:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:35.792 20:27:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:35.792 1+0 records in 00:04:35.792 1+0 records out 00:04:35.793 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430666 s, 9.5 MB/s 00:04:35.793 20:27:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:35.793 20:27:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:35.793 20:27:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:35.793 20:27:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:35.793 20:27:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:35.793 20:27:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:35.793 20:27:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:35.793 20:27:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:35.793 20:27:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.793 20:27:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:36.054 20:27:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:36.054 { 00:04:36.054 "nbd_device": "/dev/nbd0", 00:04:36.054 "bdev_name": "Malloc0" 00:04:36.054 }, 00:04:36.054 { 00:04:36.054 "nbd_device": "/dev/nbd1", 00:04:36.054 "bdev_name": "Malloc1" 00:04:36.054 } 
00:04:36.054 ]' 00:04:36.054 20:27:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:36.054 { 00:04:36.054 "nbd_device": "/dev/nbd0", 00:04:36.054 "bdev_name": "Malloc0" 00:04:36.054 }, 00:04:36.054 { 00:04:36.054 "nbd_device": "/dev/nbd1", 00:04:36.054 "bdev_name": "Malloc1" 00:04:36.054 } 00:04:36.054 ]' 00:04:36.054 20:27:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:36.054 20:27:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:36.054 /dev/nbd1' 00:04:36.054 20:27:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:36.054 /dev/nbd1' 00:04:36.054 20:27:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:36.054 20:27:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:36.054 20:27:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:36.054 20:27:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:36.054 20:27:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:36.054 20:27:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:36.054 20:27:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.054 20:27:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:36.054 20:27:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:36.055 20:27:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:36.055 20:27:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:36.055 20:27:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:36.055 256+0 records in 00:04:36.055 256+0 records out 00:04:36.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00674669 s, 155 MB/s 00:04:36.055 20:27:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:36.055 20:27:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:36.055 256+0 records in 00:04:36.055 256+0 records out 00:04:36.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0154128 s, 68.0 MB/s 00:04:36.055 20:27:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:36.055 20:27:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:36.055 256+0 records in 00:04:36.055 256+0 records out 00:04:36.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224747 s, 46.7 MB/s 00:04:36.055 20:27:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:36.055 20:27:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.055 20:27:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:36.055 20:27:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:36.055 20:27:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:36.055 20:27:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:36.055 20:27:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:36.055 20:27:50 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:36.055 20:27:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:36.055 20:27:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:36.055 20:27:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:36.055 20:27:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:36.055 20:27:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:36.055 20:27:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.055 20:27:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.055 20:27:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:36.055 20:27:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:36.055 20:27:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:36.055 20:27:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:36.314 20:27:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:36.314 20:27:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:36.314 20:27:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:36.314 20:27:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:36.314 20:27:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:36.314 20:27:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:36.314 20:27:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:36.314 20:27:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:36.314 20:27:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:36.314 20:27:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:36.571 20:27:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:36.571 20:27:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:36.571 20:27:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:36.571 20:27:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:36.571 20:27:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:36.571 20:27:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:36.571 20:27:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:36.571 20:27:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:36.571 20:27:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:36.571 20:27:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.572 20:27:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:36.829 20:27:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:36.829 20:27:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:36.829 20:27:51 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:04:36.829 20:27:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:36.829 20:27:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:36.829 20:27:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:36.829 20:27:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:36.829 20:27:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:36.829 20:27:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:36.829 20:27:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:36.829 20:27:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:36.829 20:27:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:36.829 20:27:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:37.086 20:27:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:37.086 [2024-11-26 20:27:51.559512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:37.086 [2024-11-26 20:27:51.595773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.086 [2024-11-26 20:27:51.595932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.086 [2024-11-26 20:27:51.628623] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:37.086 [2024-11-26 20:27:51.628684] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:37.086 [2024-11-26 20:27:51.628693] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:40.454 spdk_app_start Round 2 00:04:40.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:40.454 20:27:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:40.454 20:27:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:40.454 20:27:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57729 /var/tmp/spdk-nbd.sock 00:04:40.454 20:27:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57729 ']' 00:04:40.454 20:27:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:40.454 20:27:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.454 20:27:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
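The nbd_get_count step that shows up between rounds counts attached devices by parsing the nbd_get_disks JSON reply rather than poking at /dev. The same check as a standalone sketch (same rpc.py and socket assumptions as in the previous sketch):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    disks_json=$(rpc nbd_get_disks)                # '[]' once every disk is stopped
    count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    if [ "$count" -eq 0 ]; then
        echo "all NBD devices detached"
    fi

grep -c still prints 0 when nothing matches but exits non-zero, hence the || true guard; the bare "true" visible in the trace right after grep -c plays the same role in the test helper.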
00:04:40.454 20:27:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.454 20:27:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:40.454 20:27:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.454 20:27:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:40.454 20:27:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.454 Malloc0 00:04:40.454 20:27:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.712 Malloc1 00:04:40.712 20:27:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.712 20:27:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.712 20:27:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.712 20:27:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:40.712 20:27:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.712 20:27:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:40.712 20:27:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.712 20:27:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.712 20:27:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.712 20:27:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:40.712 20:27:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.712 20:27:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:40.712 20:27:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:40.712 20:27:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:40.712 20:27:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.712 20:27:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:40.969 /dev/nbd0 00:04:40.969 20:27:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:40.969 20:27:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:40.969 20:27:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:40.969 20:27:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:40.969 20:27:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:40.969 20:27:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:40.969 20:27:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:40.969 20:27:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:40.969 20:27:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:40.969 20:27:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:40.969 20:27:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.969 1+0 records in 00:04:40.969 1+0 records out 
00:04:40.969 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027181 s, 15.1 MB/s 00:04:40.969 20:27:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:40.969 20:27:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:40.969 20:27:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:40.969 20:27:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:40.969 20:27:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:40.969 20:27:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.969 20:27:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.969 20:27:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:41.226 /dev/nbd1 00:04:41.226 20:27:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:41.226 20:27:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:41.226 20:27:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:41.226 20:27:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:41.226 20:27:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:41.226 20:27:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:41.226 20:27:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:41.226 20:27:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:41.226 20:27:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:41.226 20:27:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:41.226 20:27:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:41.226 1+0 records in 00:04:41.226 1+0 records out 00:04:41.226 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235363 s, 17.4 MB/s 00:04:41.226 20:27:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:41.226 20:27:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:41.226 20:27:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:41.227 20:27:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:41.227 20:27:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:41.227 20:27:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:41.227 20:27:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:41.227 20:27:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:41.227 20:27:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.227 20:27:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:41.486 { 00:04:41.486 "nbd_device": "/dev/nbd0", 00:04:41.486 "bdev_name": "Malloc0" 00:04:41.486 }, 00:04:41.486 { 00:04:41.486 "nbd_device": "/dev/nbd1", 00:04:41.486 "bdev_name": "Malloc1" 00:04:41.486 } 
00:04:41.486 ]' 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:41.486 { 00:04:41.486 "nbd_device": "/dev/nbd0", 00:04:41.486 "bdev_name": "Malloc0" 00:04:41.486 }, 00:04:41.486 { 00:04:41.486 "nbd_device": "/dev/nbd1", 00:04:41.486 "bdev_name": "Malloc1" 00:04:41.486 } 00:04:41.486 ]' 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:41.486 /dev/nbd1' 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:41.486 /dev/nbd1' 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:41.486 256+0 records in 00:04:41.486 256+0 records out 00:04:41.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00468818 s, 224 MB/s 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:41.486 256+0 records in 00:04:41.486 256+0 records out 00:04:41.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131082 s, 80.0 MB/s 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:41.486 256+0 records in 00:04:41.486 256+0 records out 00:04:41.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159091 s, 65.9 MB/s 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:41.486 20:27:55 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.486 20:27:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:41.744 20:27:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:41.744 20:27:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:41.744 20:27:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:41.744 20:27:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.744 20:27:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.744 20:27:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:41.744 20:27:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:41.744 20:27:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.744 20:27:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.744 20:27:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:42.002 20:27:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:42.002 20:27:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:42.002 20:27:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:42.002 20:27:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:42.002 20:27:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:42.002 20:27:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:42.002 20:27:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:42.002 20:27:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:42.002 20:27:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:42.002 20:27:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.002 20:27:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:42.002 20:27:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:42.002 20:27:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:42.002 20:27:56 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:42.261 20:27:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:42.261 20:27:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:42.261 20:27:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:42.261 20:27:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:42.261 20:27:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:42.261 20:27:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:42.261 20:27:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:42.261 20:27:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:42.261 20:27:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:42.261 20:27:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:42.261 20:27:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:42.521 [2024-11-26 20:27:56.846147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.521 [2024-11-26 20:27:56.877342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.521 [2024-11-26 20:27:56.877478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.521 [2024-11-26 20:27:56.907280] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:42.521 [2024-11-26 20:27:56.907331] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:42.521 [2024-11-26 20:27:56.907337] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:45.803 20:27:59 event.app_repeat -- event/event.sh@38 -- # waitforlisten 57729 /var/tmp/spdk-nbd.sock 00:04:45.803 20:27:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57729 ']' 00:04:45.803 20:27:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:45.803 20:27:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:45.803 20:27:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
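The teardown just traced stops each /dev/nbdX over the RPC socket and then polls /proc/partitions until the kernel drops the node. A minimal bash sketch of that wait loop, reconstructed from the xtrace above (names follow bdev/nbd_common.sh; the poll interval is an assumption, not taken from this log):

# sketch only, reconstructed from the trace rather than copied from the repository
waitfornbd_exit() {
    local nbd_name=$1
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1    # assumed back-off while the device is still listed
        else
            break        # node is gone, detach finished
        fi
    done
    return 0
}

In the run above the first grep already misses nbd0 and nbd1, so the loop breaks on its first iteration and nbd_get_disks then reports an empty list.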
00:04:45.803 20:27:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.803 20:27:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:45.803 20:27:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.803 20:27:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:45.803 20:27:59 event.app_repeat -- event/event.sh@39 -- # killprocess 57729 00:04:45.803 20:27:59 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 57729 ']' 00:04:45.803 20:27:59 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 57729 00:04:45.803 20:27:59 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:45.803 20:27:59 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.803 20:27:59 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57729 00:04:45.803 killing process with pid 57729 00:04:45.803 20:28:00 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.803 20:28:00 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.803 20:28:00 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57729' 00:04:45.803 20:28:00 event.app_repeat -- common/autotest_common.sh@973 -- # kill 57729 00:04:45.803 20:28:00 event.app_repeat -- common/autotest_common.sh@978 -- # wait 57729 00:04:45.803 spdk_app_start is called in Round 0. 00:04:45.803 Shutdown signal received, stop current app iteration 00:04:45.803 Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 reinitialization... 00:04:45.803 spdk_app_start is called in Round 1. 00:04:45.803 Shutdown signal received, stop current app iteration 00:04:45.803 Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 reinitialization... 00:04:45.803 spdk_app_start is called in Round 2. 00:04:45.803 Shutdown signal received, stop current app iteration 00:04:45.803 Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 reinitialization... 00:04:45.803 spdk_app_start is called in Round 3. 00:04:45.803 Shutdown signal received, stop current app iteration 00:04:45.803 ************************************ 00:04:45.803 END TEST app_repeat 00:04:45.803 ************************************ 00:04:45.803 20:28:00 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:45.803 20:28:00 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:45.803 00:04:45.803 real 0m16.425s 00:04:45.803 user 0m36.848s 00:04:45.803 sys 0m2.066s 00:04:45.803 20:28:00 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.803 20:28:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:45.803 20:28:00 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:45.803 20:28:00 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:45.803 20:28:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.803 20:28:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.803 20:28:00 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.803 ************************************ 00:04:45.803 START TEST cpu_locks 00:04:45.803 ************************************ 00:04:45.803 20:28:00 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:45.803 * Looking for test storage... 
00:04:45.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:45.803 20:28:00 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:45.803 20:28:00 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:45.803 20:28:00 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:45.803 20:28:00 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.803 20:28:00 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:45.803 20:28:00 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.803 20:28:00 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:45.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.803 --rc genhtml_branch_coverage=1 00:04:45.803 --rc genhtml_function_coverage=1 00:04:45.803 --rc genhtml_legend=1 00:04:45.803 --rc geninfo_all_blocks=1 00:04:45.803 --rc geninfo_unexecuted_blocks=1 00:04:45.803 00:04:45.803 ' 00:04:45.803 20:28:00 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:45.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.803 --rc genhtml_branch_coverage=1 00:04:45.803 --rc genhtml_function_coverage=1 
00:04:45.803 --rc genhtml_legend=1 00:04:45.803 --rc geninfo_all_blocks=1 00:04:45.803 --rc geninfo_unexecuted_blocks=1 00:04:45.803 00:04:45.803 ' 00:04:45.803 20:28:00 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:45.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.803 --rc genhtml_branch_coverage=1 00:04:45.803 --rc genhtml_function_coverage=1 00:04:45.803 --rc genhtml_legend=1 00:04:45.803 --rc geninfo_all_blocks=1 00:04:45.803 --rc geninfo_unexecuted_blocks=1 00:04:45.803 00:04:45.803 ' 00:04:45.803 20:28:00 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:45.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.803 --rc genhtml_branch_coverage=1 00:04:45.803 --rc genhtml_function_coverage=1 00:04:45.803 --rc genhtml_legend=1 00:04:45.803 --rc geninfo_all_blocks=1 00:04:45.803 --rc geninfo_unexecuted_blocks=1 00:04:45.803 00:04:45.803 ' 00:04:45.803 20:28:00 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:45.803 20:28:00 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:45.803 20:28:00 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:45.803 20:28:00 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:45.803 20:28:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.803 20:28:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.803 20:28:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:45.803 ************************************ 00:04:45.803 START TEST default_locks 00:04:45.803 ************************************ 00:04:45.803 20:28:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:45.803 20:28:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58146 00:04:45.803 20:28:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58146 00:04:45.803 20:28:00 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58146 ']' 00:04:45.803 20:28:00 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.803 20:28:00 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.803 20:28:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:45.803 20:28:00 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.803 20:28:00 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.803 20:28:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:45.803 [2024-11-26 20:28:00.330010] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
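The scripts/common.sh lines traced a little earlier (lt 1.15 2 going through cmp_versions) amount to a field-by-field compare of dotted version strings. A simplified sketch, covering only the '<' path this run exercises (the real helper also validates fields and handles '>', '>=', and '=='):

lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1    # first argument is newer
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0    # first argument is older, "less than" holds
    done
    return 1    # equal versions are not strictly less
}

Here lt 1.15 2 succeeds on the first field (1 < 2), which is why the lcov 1.x branch/function coverage options get exported in the lines above.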
00:04:45.803 [2024-11-26 20:28:00.330063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58146 ] 00:04:46.063 [2024-11-26 20:28:00.466909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.063 [2024-11-26 20:28:00.502510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.063 [2024-11-26 20:28:00.545218] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:46.998 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.998 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:46.998 20:28:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58146 00:04:46.998 20:28:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58146 00:04:46.998 20:28:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:46.998 20:28:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58146 00:04:46.998 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58146 ']' 00:04:46.998 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58146 00:04:46.998 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:46.998 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.998 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58146 00:04:46.998 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.998 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.998 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58146' 00:04:46.998 killing process with pid 58146 00:04:46.998 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58146 00:04:46.998 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58146 00:04:47.255 20:28:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58146 00:04:47.255 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:47.255 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58146 00:04:47.255 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:47.255 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.255 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:47.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
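The locking assertion traced above is compact: a core-claiming spdk_tgt holds a POSIX lock on one /var/tmp/spdk_cpu_lock_* file per claimed core (those file names show up further down in this log), and lslocks lists such locks per PID. A minimal sketch of the check, using the names from cpu_locks.sh:

locks_exist() {
    local pid=$1
    # every claimed core corresponds to a locked /var/tmp/spdk_cpu_lock_NNN file
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

For the single-core mask -m 0x1 used here, one such lock (core 0) is expected to be held while pid 58146 is alive.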
00:04:47.255 ERROR: process (pid: 58146) is no longer running 00:04:47.255 ************************************ 00:04:47.255 END TEST default_locks 00:04:47.255 ************************************ 00:04:47.255 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.255 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58146 00:04:47.255 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58146 ']' 00:04:47.255 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.255 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.255 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.255 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.255 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:47.255 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58146) - No such process 00:04:47.255 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.255 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:47.255 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:47.255 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:47.255 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:47.255 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:47.255 20:28:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:47.255 20:28:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:47.255 20:28:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:47.255 20:28:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:47.255 00:04:47.255 real 0m1.355s 00:04:47.255 user 0m1.461s 00:04:47.255 sys 0m0.325s 00:04:47.255 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.255 20:28:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:47.255 20:28:01 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:47.255 20:28:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.255 20:28:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.255 20:28:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:47.255 ************************************ 00:04:47.255 START TEST default_locks_via_rpc 00:04:47.255 ************************************ 00:04:47.255 20:28:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:47.255 20:28:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58193 00:04:47.255 20:28:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58193 00:04:47.255 20:28:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58193 ']' 
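The expected-failure step traced above (waitforlisten on the already-killed pid 58146 ending in 'No such process') is wrapped in a small NOT helper so that the failure counts as a pass. A heavily simplified sketch; the real helper in autotest_common.sh also validates its argument and special-cases signal exit codes above 128:

NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))    # succeed only when the wrapped command failed
}

# usage as in the trace: NOT waitforlisten "$spdk_tgt_pid"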
00:04:47.255 20:28:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.255 20:28:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.255 20:28:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.255 20:28:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:47.255 20:28:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.255 20:28:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.255 [2024-11-26 20:28:01.729994] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:04:47.255 [2024-11-26 20:28:01.730147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58193 ] 00:04:47.511 [2024-11-26 20:28:01.870137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.512 [2024-11-26 20:28:01.906895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.512 [2024-11-26 20:28:01.950006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:48.076 20:28:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.076 20:28:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:48.076 20:28:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:48.076 20:28:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.076 20:28:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.076 20:28:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.076 20:28:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:48.076 20:28:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:48.076 20:28:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:48.076 20:28:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:48.076 20:28:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:48.076 20:28:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.076 20:28:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.076 20:28:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.076 20:28:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58193 00:04:48.076 20:28:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58193 00:04:48.076 20:28:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
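The default_locks_via_rpc sequence traced above toggles the core claims of a live target purely over RPC. A hedged usage sketch with the socket and script paths as they appear in this log (the script's own no_locks check inspects the lock files directly; the lslocks form shown here mirrors its locks_exist counterpart):

# drop the claims: the spdk_cpu_lock files should no longer be locked by the target
scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
! lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock

# re-acquire them for the same cpumask and verify the lock is back
scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock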
00:04:48.332 20:28:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58193 00:04:48.332 20:28:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58193 ']' 00:04:48.332 20:28:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58193 00:04:48.332 20:28:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:48.332 20:28:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.332 20:28:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58193 00:04:48.332 killing process with pid 58193 00:04:48.332 20:28:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.332 20:28:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.332 20:28:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58193' 00:04:48.332 20:28:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58193 00:04:48.332 20:28:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58193 00:04:48.589 00:04:48.589 real 0m1.357s 00:04:48.589 user 0m1.462s 00:04:48.589 sys 0m0.346s 00:04:48.589 20:28:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.589 ************************************ 00:04:48.589 END TEST default_locks_via_rpc 00:04:48.589 ************************************ 00:04:48.589 20:28:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.589 20:28:03 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:48.589 20:28:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.589 20:28:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.589 20:28:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:48.589 ************************************ 00:04:48.589 START TEST non_locking_app_on_locked_coremask 00:04:48.589 ************************************ 00:04:48.589 20:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:48.589 20:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58233 00:04:48.589 20:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58233 /var/tmp/spdk.sock 00:04:48.589 20:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58233 ']' 00:04:48.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.589 20:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.589 20:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.589 20:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
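The killprocess teardown that recurs throughout this run (traced in full for pid 58193 just above) is essentially a guarded kill-and-reap. A simplified sketch; the sudo handling in the real autotest_common.sh helper is more involved than shown here:

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 0                        # already gone, nothing to do
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")   # never kill a bare sudo wrapper by mistake
    [[ $process_name == sudo ]] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                               # reap it so the test script sees a clean exit
}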
00:04:48.589 20:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.589 20:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:48.589 20:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:48.846 [2024-11-26 20:28:03.149057] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:04:48.846 [2024-11-26 20:28:03.149239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58233 ] 00:04:48.846 [2024-11-26 20:28:03.290123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.846 [2024-11-26 20:28:03.325447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.846 [2024-11-26 20:28:03.370005] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:49.103 20:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.103 20:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:49.103 20:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58247 00:04:49.103 20:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58247 /var/tmp/spdk2.sock 00:04:49.103 20:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:49.103 20:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58247 ']' 00:04:49.103 20:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:49.103 20:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:49.103 20:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:49.103 20:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.103 20:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:49.103 [2024-11-26 20:28:03.539044] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:04:49.103 [2024-11-26 20:28:03.539264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58247 ] 00:04:49.458 [2024-11-26 20:28:03.692802] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:49.458 [2024-11-26 20:28:03.692850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.458 [2024-11-26 20:28:03.765190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.458 [2024-11-26 20:28:03.854194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:50.026 20:28:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.026 20:28:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:50.026 20:28:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58233 00:04:50.026 20:28:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:50.026 20:28:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58233 00:04:50.283 20:28:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58233 00:04:50.283 20:28:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58233 ']' 00:04:50.283 20:28:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58233 00:04:50.283 20:28:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:50.283 20:28:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.283 20:28:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58233 00:04:50.539 20:28:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.539 20:28:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.540 killing process with pid 58233 00:04:50.540 20:28:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58233' 00:04:50.540 20:28:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58233 00:04:50.540 20:28:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58233 00:04:50.797 20:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58247 00:04:50.797 20:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58247 ']' 00:04:50.797 20:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58247 00:04:50.797 20:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:50.797 20:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.797 20:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58247 00:04:50.797 killing process with pid 58247 00:04:50.797 20:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.797 20:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.797 20:28:05 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58247' 00:04:50.797 20:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58247 00:04:50.797 20:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58247 00:04:51.054 ************************************ 00:04:51.054 END TEST non_locking_app_on_locked_coremask 00:04:51.054 ************************************ 00:04:51.054 00:04:51.054 real 0m2.361s 00:04:51.054 user 0m2.652s 00:04:51.054 sys 0m0.643s 00:04:51.054 20:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.054 20:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.054 20:28:05 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:51.054 20:28:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.054 20:28:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.054 20:28:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.055 ************************************ 00:04:51.055 START TEST locking_app_on_unlocked_coremask 00:04:51.055 ************************************ 00:04:51.055 20:28:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:51.055 20:28:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58298 00:04:51.055 20:28:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58298 /var/tmp/spdk.sock 00:04:51.055 20:28:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58298 ']' 00:04:51.055 20:28:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.055 20:28:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.055 20:28:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.055 20:28:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.055 20:28:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.055 20:28:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:51.055 [2024-11-26 20:28:05.553121] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:04:51.055 [2024-11-26 20:28:05.553186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58298 ] 00:04:51.312 [2024-11-26 20:28:05.693844] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:51.312 [2024-11-26 20:28:05.694017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.312 [2024-11-26 20:28:05.730140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.312 [2024-11-26 20:28:05.774725] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:51.876 20:28:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.876 20:28:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:51.876 20:28:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58308 00:04:51.876 20:28:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58308 /var/tmp/spdk2.sock 00:04:51.876 20:28:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:51.876 20:28:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58308 ']' 00:04:51.876 20:28:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:51.876 20:28:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.876 20:28:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:51.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:51.876 20:28:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.876 20:28:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:52.133 [2024-11-26 20:28:06.456224] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
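The two launches traced above are the core of locking_app_on_unlocked_coremask: the first target opts out of the claim, so a second target on the same core can still take it. A hedged sketch of the pattern with the flags and socket path from this log (binary path shortened):

build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &    # pid 58298 here: runs on core 0, takes no lock
build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &     # pid 58308 here: claims /var/tmp/spdk_cpu_lock_000
# lslocks -p <second pid> | grep spdk_cpu_lock  ->  only the second instance holds the core-0 lock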
00:04:52.134 [2024-11-26 20:28:06.456409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58308 ] 00:04:52.134 [2024-11-26 20:28:06.608974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.134 [2024-11-26 20:28:06.680893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.392 [2024-11-26 20:28:06.767846] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:53.037 20:28:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.037 20:28:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:53.037 20:28:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58308 00:04:53.037 20:28:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58308 00:04:53.037 20:28:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:53.297 20:28:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58298 00:04:53.297 20:28:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58298 ']' 00:04:53.297 20:28:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58298 00:04:53.297 20:28:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:53.297 20:28:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.297 20:28:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58298 00:04:53.297 killing process with pid 58298 00:04:53.297 20:28:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.297 20:28:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.297 20:28:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58298' 00:04:53.297 20:28:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58298 00:04:53.297 20:28:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58298 00:04:53.555 20:28:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58308 00:04:53.555 20:28:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58308 ']' 00:04:53.555 20:28:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58308 00:04:53.555 20:28:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:53.555 20:28:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.555 20:28:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58308 00:04:53.555 killing process with pid 58308 00:04:53.555 20:28:08 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.555 20:28:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.555 20:28:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58308' 00:04:53.555 20:28:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58308 00:04:53.555 20:28:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58308 00:04:53.815 00:04:53.815 real 0m2.750s 00:04:53.815 user 0m3.137s 00:04:53.815 sys 0m0.639s 00:04:53.815 ************************************ 00:04:53.815 END TEST locking_app_on_unlocked_coremask 00:04:53.815 ************************************ 00:04:53.815 20:28:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.815 20:28:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.815 20:28:08 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:53.815 20:28:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.815 20:28:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.815 20:28:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.815 ************************************ 00:04:53.815 START TEST locking_app_on_locked_coremask 00:04:53.815 ************************************ 00:04:53.815 20:28:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:53.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.815 20:28:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58364 00:04:53.815 20:28:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58364 /var/tmp/spdk.sock 00:04:53.815 20:28:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.815 20:28:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58364 ']' 00:04:53.815 20:28:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.815 20:28:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.815 20:28:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.815 20:28:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.815 20:28:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.815 [2024-11-26 20:28:08.335501] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:04:53.815 [2024-11-26 20:28:08.335675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58364 ] 00:04:54.073 [2024-11-26 20:28:08.471443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.073 [2024-11-26 20:28:08.506847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.073 [2024-11-26 20:28:08.550463] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:55.008 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.008 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:55.008 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:55.008 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58380 00:04:55.008 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58380 /var/tmp/spdk2.sock 00:04:55.008 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:55.008 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58380 /var/tmp/spdk2.sock 00:04:55.008 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:55.008 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.008 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:55.008 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.008 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58380 /var/tmp/spdk2.sock 00:04:55.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:55.008 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58380 ']' 00:04:55.008 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:55.008 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.008 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:55.008 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.008 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.008 [2024-11-26 20:28:09.242243] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:04:55.008 [2024-11-26 20:28:09.242420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58380 ] 00:04:55.008 [2024-11-26 20:28:09.393174] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58364 has claimed it. 00:04:55.008 [2024-11-26 20:28:09.393229] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:55.574 ERROR: process (pid: 58380) is no longer running 00:04:55.574 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58380) - No such process 00:04:55.574 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.574 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:55.574 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:55.574 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:55.574 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:55.574 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:55.574 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58364 00:04:55.574 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58364 00:04:55.574 20:28:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:55.831 20:28:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58364 00:04:55.831 20:28:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58364 ']' 00:04:55.831 20:28:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58364 00:04:55.831 20:28:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:55.831 20:28:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.831 20:28:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58364 00:04:55.831 killing process with pid 58364 00:04:55.831 20:28:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:55.831 20:28:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:55.831 20:28:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58364' 00:04:55.831 20:28:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58364 00:04:55.831 20:28:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58364 00:04:55.831 ************************************ 00:04:55.831 END TEST locking_app_on_locked_coremask 00:04:55.831 ************************************ 00:04:55.831 00:04:55.831 real 0m2.076s 00:04:55.831 user 0m2.387s 00:04:55.831 sys 0m0.400s 00:04:55.831 20:28:10 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.831 20:28:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.089 20:28:10 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:56.089 20:28:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.089 20:28:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.089 20:28:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:56.090 ************************************ 00:04:56.090 START TEST locking_overlapped_coremask 00:04:56.090 ************************************ 00:04:56.090 20:28:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:56.090 20:28:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58420 00:04:56.090 20:28:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58420 /var/tmp/spdk.sock 00:04:56.090 20:28:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58420 ']' 00:04:56.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.090 20:28:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.090 20:28:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.090 20:28:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:04:56.090 20:28:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.090 20:28:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.090 20:28:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.090 [2024-11-26 20:28:10.446422] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
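The locking_app_on_locked_coremask run that finished above is the negative counterpart: with the claim left enabled on both sides, the second target must abort. A hedged reproduction sketch using the flags from this log:

build/bin/spdk_tgt -m 0x1 &                          # claims core 0 via /var/tmp/spdk_cpu_lock_000
build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock     # expected to exit: "Cannot create lock on core 0, probably process ... has claimed it"
# the test wraps the second waitforlisten in NOT, so this expected failure is what makes it pass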
00:04:56.090 [2024-11-26 20:28:10.446486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58420 ] 00:04:56.090 [2024-11-26 20:28:10.583494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:56.090 [2024-11-26 20:28:10.623859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.090 [2024-11-26 20:28:10.623908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:56.090 [2024-11-26 20:28:10.623909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.349 [2024-11-26 20:28:10.670343] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:56.916 20:28:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.916 20:28:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:56.916 20:28:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58438 00:04:56.916 20:28:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58438 /var/tmp/spdk2.sock 00:04:56.916 20:28:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:56.916 20:28:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:56.916 20:28:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58438 /var/tmp/spdk2.sock 00:04:56.916 20:28:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:56.916 20:28:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.916 20:28:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:56.916 20:28:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.916 20:28:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58438 /var/tmp/spdk2.sock 00:04:56.916 20:28:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58438 ']' 00:04:56.917 20:28:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:56.917 20:28:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.917 20:28:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:56.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:56.917 20:28:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.917 20:28:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.917 [2024-11-26 20:28:11.356380] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:04:56.917 [2024-11-26 20:28:11.356445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58438 ] 00:04:57.178 [2024-11-26 20:28:11.509617] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58420 has claimed it. 00:04:57.178 [2024-11-26 20:28:11.509675] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:57.746 ERROR: process (pid: 58438) is no longer running 00:04:57.746 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58438) - No such process 00:04:57.746 20:28:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.746 20:28:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:57.746 20:28:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:57.746 20:28:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:57.746 20:28:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:57.746 20:28:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:57.746 20:28:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:57.746 20:28:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:57.746 20:28:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:57.746 20:28:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:57.746 20:28:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58420 00:04:57.746 20:28:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 58420 ']' 00:04:57.746 20:28:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 58420 00:04:57.746 20:28:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:57.746 20:28:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.746 20:28:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58420 00:04:57.746 20:28:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.746 20:28:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.746 20:28:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58420' 00:04:57.746 killing process with pid 58420 00:04:57.746 20:28:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 58420 00:04:57.746 20:28:12 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 58420 00:04:58.005 00:04:58.005 real 0m1.898s 00:04:58.005 user 0m5.471s 00:04:58.005 sys 0m0.276s 00:04:58.005 20:28:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.005 20:28:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.005 ************************************ 00:04:58.005 END TEST locking_overlapped_coremask 00:04:58.005 ************************************ 00:04:58.005 20:28:12 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:58.005 20:28:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.005 20:28:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.005 20:28:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.005 ************************************ 00:04:58.005 START TEST locking_overlapped_coremask_via_rpc 00:04:58.005 ************************************ 00:04:58.005 20:28:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:58.005 20:28:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58478 00:04:58.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.005 20:28:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58478 /var/tmp/spdk.sock 00:04:58.005 20:28:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58478 ']' 00:04:58.005 20:28:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.005 20:28:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.006 20:28:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.006 20:28:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.006 20:28:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:58.006 20:28:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.006 [2024-11-26 20:28:12.412179] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:04:58.006 [2024-11-26 20:28:12.412363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58478 ] 00:04:58.006 [2024-11-26 20:28:12.550160] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:58.006 [2024-11-26 20:28:12.550333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:58.264 [2024-11-26 20:28:12.589117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.264 [2024-11-26 20:28:12.589431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.264 [2024-11-26 20:28:12.589433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.264 [2024-11-26 20:28:12.635392] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:58.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:58.836 20:28:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.836 20:28:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:58.836 20:28:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58496 00:04:58.836 20:28:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58496 /var/tmp/spdk2.sock 00:04:58.836 20:28:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:58.836 20:28:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58496 ']' 00:04:58.836 20:28:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:58.836 20:28:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.836 20:28:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:58.837 20:28:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.837 20:28:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.837 [2024-11-26 20:28:13.273440] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:04:58.837 [2024-11-26 20:28:13.273520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58496 ] 00:04:59.098 [2024-11-26 20:28:13.428045] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:59.098 [2024-11-26 20:28:13.428089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:59.098 [2024-11-26 20:28:13.502803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:59.098 [2024-11-26 20:28:13.509707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.098 [2024-11-26 20:28:13.509707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:59.098 [2024-11-26 20:28:13.598921] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:59.671 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.671 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:59.671 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:59.671 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.671 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.671 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.671 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:59.671 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:59.671 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:59.671 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:59.671 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.671 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:59.671 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.671 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:59.671 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.671 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.671 [2024-11-26 20:28:14.183720] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58478 has claimed it. 00:04:59.671 request: 00:04:59.671 { 00:04:59.671 "method": "framework_enable_cpumask_locks", 00:04:59.671 "req_id": 1 00:04:59.671 } 00:04:59.672 Got JSON-RPC error response 00:04:59.672 response: 00:04:59.672 { 00:04:59.672 "code": -32603, 00:04:59.672 "message": "Failed to claim CPU core: 2" 00:04:59.672 } 00:04:59.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:59.672 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:59.672 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:59.672 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:59.672 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:59.672 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:59.672 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58478 /var/tmp/spdk.sock 00:04:59.672 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58478 ']' 00:04:59.672 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.672 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.672 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.672 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.672 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.932 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.932 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:59.932 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58496 /var/tmp/spdk2.sock 00:04:59.932 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58496 ']' 00:04:59.932 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:59.932 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:59.932 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:04:59.932 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.932 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.254 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.254 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:00.254 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:00.254 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:00.254 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:00.255 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:00.255 00:05:00.255 real 0m2.265s 00:05:00.255 user 0m1.043s 00:05:00.255 sys 0m0.145s 00:05:00.255 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.255 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.255 ************************************ 00:05:00.255 END TEST locking_overlapped_coremask_via_rpc 00:05:00.255 ************************************ 00:05:00.255 20:28:14 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:00.255 20:28:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58478 ]] 00:05:00.255 20:28:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58478 00:05:00.255 20:28:14 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58478 ']' 00:05:00.255 20:28:14 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58478 00:05:00.255 20:28:14 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:00.255 20:28:14 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.255 20:28:14 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58478 00:05:00.255 killing process with pid 58478 00:05:00.255 20:28:14 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.255 20:28:14 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.255 20:28:14 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58478' 00:05:00.255 20:28:14 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58478 00:05:00.255 20:28:14 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58478 00:05:00.514 20:28:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58496 ]] 00:05:00.514 20:28:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58496 00:05:00.514 20:28:14 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58496 ']' 00:05:00.514 20:28:14 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58496 00:05:00.514 20:28:14 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:00.514 20:28:14 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.514 
20:28:14 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58496 00:05:00.514 20:28:14 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:00.514 20:28:14 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:00.514 20:28:14 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58496' 00:05:00.514 killing process with pid 58496 00:05:00.514 20:28:14 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58496 00:05:00.514 20:28:14 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58496 00:05:00.776 20:28:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:00.776 20:28:15 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:00.776 20:28:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58478 ]] 00:05:00.776 20:28:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58478 00:05:00.776 20:28:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58478 ']' 00:05:00.776 20:28:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58478 00:05:00.776 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58478) - No such process 00:05:00.776 Process with pid 58478 is not found 00:05:00.776 20:28:15 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58478 is not found' 00:05:00.776 Process with pid 58496 is not found 00:05:00.776 20:28:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58496 ]] 00:05:00.776 20:28:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58496 00:05:00.776 20:28:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58496 ']' 00:05:00.776 20:28:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58496 00:05:00.776 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58496) - No such process 00:05:00.776 20:28:15 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58496 is not found' 00:05:00.776 20:28:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:00.776 ************************************ 00:05:00.776 END TEST cpu_locks 00:05:00.776 ************************************ 00:05:00.776 00:05:00.776 real 0m15.037s 00:05:00.776 user 0m27.987s 00:05:00.776 sys 0m3.384s 00:05:00.776 20:28:15 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.776 20:28:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.776 ************************************ 00:05:00.776 END TEST event 00:05:00.776 ************************************ 00:05:00.776 00:05:00.776 real 0m40.465s 00:05:00.776 user 1m20.504s 00:05:00.776 sys 0m6.061s 00:05:00.776 20:28:15 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.776 20:28:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.776 20:28:15 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:00.776 20:28:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.776 20:28:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.776 20:28:15 -- common/autotest_common.sh@10 -- # set +x 00:05:00.776 ************************************ 00:05:00.776 START TEST thread 00:05:00.776 ************************************ 00:05:00.776 20:28:15 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:01.038 * Looking for test storage... 
00:05:01.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:01.038 20:28:15 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:01.038 20:28:15 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:01.038 20:28:15 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:01.038 20:28:15 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:01.038 20:28:15 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.038 20:28:15 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.038 20:28:15 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.038 20:28:15 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.038 20:28:15 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.038 20:28:15 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.038 20:28:15 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.038 20:28:15 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.038 20:28:15 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.038 20:28:15 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.038 20:28:15 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.038 20:28:15 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:01.038 20:28:15 thread -- scripts/common.sh@345 -- # : 1 00:05:01.038 20:28:15 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.038 20:28:15 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.038 20:28:15 thread -- scripts/common.sh@365 -- # decimal 1 00:05:01.038 20:28:15 thread -- scripts/common.sh@353 -- # local d=1 00:05:01.038 20:28:15 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.038 20:28:15 thread -- scripts/common.sh@355 -- # echo 1 00:05:01.038 20:28:15 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.038 20:28:15 thread -- scripts/common.sh@366 -- # decimal 2 00:05:01.038 20:28:15 thread -- scripts/common.sh@353 -- # local d=2 00:05:01.038 20:28:15 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.038 20:28:15 thread -- scripts/common.sh@355 -- # echo 2 00:05:01.038 20:28:15 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.038 20:28:15 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.038 20:28:15 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.038 20:28:15 thread -- scripts/common.sh@368 -- # return 0 00:05:01.038 20:28:15 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.038 20:28:15 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:01.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.038 --rc genhtml_branch_coverage=1 00:05:01.038 --rc genhtml_function_coverage=1 00:05:01.038 --rc genhtml_legend=1 00:05:01.038 --rc geninfo_all_blocks=1 00:05:01.038 --rc geninfo_unexecuted_blocks=1 00:05:01.038 00:05:01.038 ' 00:05:01.038 20:28:15 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:01.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.038 --rc genhtml_branch_coverage=1 00:05:01.038 --rc genhtml_function_coverage=1 00:05:01.038 --rc genhtml_legend=1 00:05:01.038 --rc geninfo_all_blocks=1 00:05:01.038 --rc geninfo_unexecuted_blocks=1 00:05:01.038 00:05:01.038 ' 00:05:01.038 20:28:15 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:01.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:01.038 --rc genhtml_branch_coverage=1 00:05:01.038 --rc genhtml_function_coverage=1 00:05:01.038 --rc genhtml_legend=1 00:05:01.038 --rc geninfo_all_blocks=1 00:05:01.038 --rc geninfo_unexecuted_blocks=1 00:05:01.038 00:05:01.038 ' 00:05:01.038 20:28:15 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:01.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.038 --rc genhtml_branch_coverage=1 00:05:01.038 --rc genhtml_function_coverage=1 00:05:01.038 --rc genhtml_legend=1 00:05:01.038 --rc geninfo_all_blocks=1 00:05:01.038 --rc geninfo_unexecuted_blocks=1 00:05:01.038 00:05:01.038 ' 00:05:01.038 20:28:15 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:01.038 20:28:15 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:01.038 20:28:15 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.038 20:28:15 thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.038 ************************************ 00:05:01.038 START TEST thread_poller_perf 00:05:01.038 ************************************ 00:05:01.038 20:28:15 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:01.038 [2024-11-26 20:28:15.463495] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:05:01.038 [2024-11-26 20:28:15.463876] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58621 ] 00:05:01.299 [2024-11-26 20:28:15.605242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.299 [2024-11-26 20:28:15.643283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.299 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:02.234 [2024-11-26T20:28:16.789Z] ====================================== 00:05:02.234 [2024-11-26T20:28:16.789Z] busy:2611189756 (cyc) 00:05:02.234 [2024-11-26T20:28:16.789Z] total_run_count: 309000 00:05:02.234 [2024-11-26T20:28:16.789Z] tsc_hz: 2600000000 (cyc) 00:05:02.234 [2024-11-26T20:28:16.789Z] ====================================== 00:05:02.234 [2024-11-26T20:28:16.789Z] poller_cost: 8450 (cyc), 3250 (nsec) 00:05:02.234 ************************************ 00:05:02.234 END TEST thread_poller_perf 00:05:02.234 ************************************ 00:05:02.234 00:05:02.234 real 0m1.234s 00:05:02.234 user 0m1.104s 00:05:02.234 sys 0m0.024s 00:05:02.234 20:28:16 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.234 20:28:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:02.234 20:28:16 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:02.234 20:28:16 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:02.234 20:28:16 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.234 20:28:16 thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.234 ************************************ 00:05:02.234 START TEST thread_poller_perf 00:05:02.234 ************************************ 00:05:02.234 20:28:16 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:02.234 [2024-11-26 20:28:16.747496] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:05:02.234 [2024-11-26 20:28:16.747572] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58652 ] 00:05:02.495 [2024-11-26 20:28:16.888128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.495 [2024-11-26 20:28:16.946338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.495 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:03.438 [2024-11-26T20:28:17.993Z] ====================================== 00:05:03.438 [2024-11-26T20:28:17.993Z] busy:2603049942 (cyc) 00:05:03.438 [2024-11-26T20:28:17.993Z] total_run_count: 3937000 00:05:03.438 [2024-11-26T20:28:17.993Z] tsc_hz: 2600000000 (cyc) 00:05:03.438 [2024-11-26T20:28:17.993Z] ====================================== 00:05:03.438 [2024-11-26T20:28:17.993Z] poller_cost: 661 (cyc), 254 (nsec) 00:05:03.438 00:05:03.438 real 0m1.248s 00:05:03.438 user 0m1.105s 00:05:03.438 sys 0m0.035s 00:05:03.438 20:28:17 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.438 20:28:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:03.438 ************************************ 00:05:03.438 END TEST thread_poller_perf 00:05:03.438 ************************************ 00:05:03.700 20:28:18 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:03.700 00:05:03.700 real 0m2.742s 00:05:03.700 user 0m2.325s 00:05:03.700 sys 0m0.172s 00:05:03.700 20:28:18 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.700 20:28:18 thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.700 ************************************ 00:05:03.700 END TEST thread 00:05:03.700 ************************************ 00:05:03.700 20:28:18 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:03.700 20:28:18 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:03.700 20:28:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.700 20:28:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.700 20:28:18 -- common/autotest_common.sh@10 -- # set +x 00:05:03.700 ************************************ 00:05:03.700 START TEST app_cmdline 00:05:03.700 ************************************ 00:05:03.700 20:28:18 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:03.700 * Looking for test storage... 
00:05:03.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:03.700 20:28:18 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:03.700 20:28:18 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:03.700 20:28:18 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:03.700 20:28:18 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:03.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.700 20:28:18 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:03.700 20:28:18 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.700 20:28:18 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:03.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.700 --rc genhtml_branch_coverage=1 00:05:03.700 --rc genhtml_function_coverage=1 00:05:03.700 --rc genhtml_legend=1 00:05:03.700 --rc geninfo_all_blocks=1 00:05:03.700 --rc geninfo_unexecuted_blocks=1 00:05:03.700 00:05:03.700 ' 00:05:03.700 20:28:18 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:03.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.700 --rc genhtml_branch_coverage=1 00:05:03.700 --rc genhtml_function_coverage=1 00:05:03.700 --rc genhtml_legend=1 00:05:03.700 --rc geninfo_all_blocks=1 00:05:03.700 --rc geninfo_unexecuted_blocks=1 00:05:03.700 00:05:03.700 ' 00:05:03.700 20:28:18 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:03.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.700 --rc genhtml_branch_coverage=1 00:05:03.700 --rc genhtml_function_coverage=1 00:05:03.700 --rc genhtml_legend=1 00:05:03.700 --rc geninfo_all_blocks=1 00:05:03.700 --rc geninfo_unexecuted_blocks=1 00:05:03.700 00:05:03.700 ' 00:05:03.700 20:28:18 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:03.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.700 --rc genhtml_branch_coverage=1 00:05:03.700 --rc genhtml_function_coverage=1 00:05:03.700 --rc genhtml_legend=1 00:05:03.700 --rc geninfo_all_blocks=1 00:05:03.700 --rc geninfo_unexecuted_blocks=1 00:05:03.700 00:05:03.700 ' 00:05:03.700 20:28:18 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:03.700 20:28:18 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=58739 00:05:03.700 20:28:18 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 58739 00:05:03.700 20:28:18 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 58739 ']' 00:05:03.700 20:28:18 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.700 20:28:18 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.700 20:28:18 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:03.700 20:28:18 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.700 20:28:18 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.700 20:28:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:03.962 [2024-11-26 20:28:18.282726] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:03.962 [2024-11-26 20:28:18.282894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58739 ] 00:05:03.962 [2024-11-26 20:28:18.417858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.962 [2024-11-26 20:28:18.454319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.962 [2024-11-26 20:28:18.500486] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:04.898 20:28:19 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.898 20:28:19 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:04.898 20:28:19 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:04.898 { 00:05:04.898 "version": "SPDK v25.01-pre git sha1 97329b16b", 00:05:04.898 "fields": { 00:05:04.898 "major": 25, 00:05:04.898 "minor": 1, 00:05:04.898 "patch": 0, 00:05:04.898 "suffix": "-pre", 00:05:04.898 "commit": "97329b16b" 00:05:04.898 } 00:05:04.898 } 00:05:05.157 20:28:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:05.157 20:28:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:05.157 20:28:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:05.157 20:28:19 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:05.157 20:28:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:05.157 20:28:19 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.157 20:28:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:05.157 20:28:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:05.157 20:28:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:05.157 20:28:19 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.157 20:28:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:05.157 20:28:19 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:05.157 20:28:19 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:05.157 20:28:19 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:05.157 20:28:19 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:05.157 20:28:19 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:05.158 20:28:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.158 20:28:19 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:05.158 20:28:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.158 20:28:19 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:05.158 20:28:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.158 20:28:19 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:05.158 20:28:19 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:05.158 20:28:19 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:05.158 request: 00:05:05.158 { 00:05:05.158 "method": "env_dpdk_get_mem_stats", 00:05:05.158 "req_id": 1 00:05:05.158 } 00:05:05.158 Got JSON-RPC error response 00:05:05.158 response: 00:05:05.158 { 00:05:05.158 "code": -32601, 00:05:05.158 "message": "Method not found" 00:05:05.158 } 00:05:05.158 20:28:19 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:05.158 20:28:19 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:05.158 20:28:19 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:05.158 20:28:19 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:05.158 20:28:19 app_cmdline -- app/cmdline.sh@1 -- # killprocess 58739 00:05:05.158 20:28:19 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 58739 ']' 00:05:05.158 20:28:19 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 58739 00:05:05.158 20:28:19 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:05.158 20:28:19 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.158 20:28:19 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58739 00:05:05.418 killing process with pid 58739 00:05:05.418 20:28:19 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.418 20:28:19 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.418 20:28:19 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58739' 00:05:05.418 20:28:19 app_cmdline -- common/autotest_common.sh@973 -- # kill 58739 00:05:05.418 20:28:19 app_cmdline -- common/autotest_common.sh@978 -- # wait 58739 00:05:05.418 ************************************ 00:05:05.418 END TEST app_cmdline 00:05:05.418 ************************************ 00:05:05.418 00:05:05.418 real 0m1.826s 00:05:05.418 user 0m2.321s 00:05:05.418 sys 0m0.337s 00:05:05.418 20:28:19 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.418 20:28:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:05.418 20:28:19 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:05.418 20:28:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.418 20:28:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.418 20:28:19 -- common/autotest_common.sh@10 -- # set +x 00:05:05.418 ************************************ 00:05:05.418 START TEST version 00:05:05.418 ************************************ 00:05:05.418 20:28:19 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:05.679 * Looking for test storage... 
00:05:05.679 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:05.679 20:28:20 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:05.679 20:28:20 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:05.679 20:28:20 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:05.679 20:28:20 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:05.679 20:28:20 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.679 20:28:20 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.679 20:28:20 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.679 20:28:20 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.679 20:28:20 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.679 20:28:20 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.679 20:28:20 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.679 20:28:20 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.679 20:28:20 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.679 20:28:20 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.679 20:28:20 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.679 20:28:20 version -- scripts/common.sh@344 -- # case "$op" in 00:05:05.679 20:28:20 version -- scripts/common.sh@345 -- # : 1 00:05:05.679 20:28:20 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.679 20:28:20 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.679 20:28:20 version -- scripts/common.sh@365 -- # decimal 1 00:05:05.679 20:28:20 version -- scripts/common.sh@353 -- # local d=1 00:05:05.679 20:28:20 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.679 20:28:20 version -- scripts/common.sh@355 -- # echo 1 00:05:05.679 20:28:20 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.679 20:28:20 version -- scripts/common.sh@366 -- # decimal 2 00:05:05.679 20:28:20 version -- scripts/common.sh@353 -- # local d=2 00:05:05.679 20:28:20 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.679 20:28:20 version -- scripts/common.sh@355 -- # echo 2 00:05:05.679 20:28:20 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.679 20:28:20 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.679 20:28:20 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.679 20:28:20 version -- scripts/common.sh@368 -- # return 0 00:05:05.679 20:28:20 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.679 20:28:20 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:05.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.679 --rc genhtml_branch_coverage=1 00:05:05.679 --rc genhtml_function_coverage=1 00:05:05.679 --rc genhtml_legend=1 00:05:05.679 --rc geninfo_all_blocks=1 00:05:05.679 --rc geninfo_unexecuted_blocks=1 00:05:05.679 00:05:05.679 ' 00:05:05.679 20:28:20 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:05.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.679 --rc genhtml_branch_coverage=1 00:05:05.679 --rc genhtml_function_coverage=1 00:05:05.679 --rc genhtml_legend=1 00:05:05.679 --rc geninfo_all_blocks=1 00:05:05.679 --rc geninfo_unexecuted_blocks=1 00:05:05.679 00:05:05.679 ' 00:05:05.679 20:28:20 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:05.679 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:05.679 --rc genhtml_branch_coverage=1 00:05:05.679 --rc genhtml_function_coverage=1 00:05:05.679 --rc genhtml_legend=1 00:05:05.679 --rc geninfo_all_blocks=1 00:05:05.679 --rc geninfo_unexecuted_blocks=1 00:05:05.679 00:05:05.679 ' 00:05:05.679 20:28:20 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:05.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.679 --rc genhtml_branch_coverage=1 00:05:05.679 --rc genhtml_function_coverage=1 00:05:05.679 --rc genhtml_legend=1 00:05:05.679 --rc geninfo_all_blocks=1 00:05:05.679 --rc geninfo_unexecuted_blocks=1 00:05:05.679 00:05:05.679 ' 00:05:05.679 20:28:20 version -- app/version.sh@17 -- # get_header_version major 00:05:05.679 20:28:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:05.679 20:28:20 version -- app/version.sh@14 -- # cut -f2 00:05:05.679 20:28:20 version -- app/version.sh@14 -- # tr -d '"' 00:05:05.679 20:28:20 version -- app/version.sh@17 -- # major=25 00:05:05.679 20:28:20 version -- app/version.sh@18 -- # get_header_version minor 00:05:05.679 20:28:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:05.679 20:28:20 version -- app/version.sh@14 -- # cut -f2 00:05:05.679 20:28:20 version -- app/version.sh@14 -- # tr -d '"' 00:05:05.679 20:28:20 version -- app/version.sh@18 -- # minor=1 00:05:05.679 20:28:20 version -- app/version.sh@19 -- # get_header_version patch 00:05:05.679 20:28:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:05.679 20:28:20 version -- app/version.sh@14 -- # cut -f2 00:05:05.679 20:28:20 version -- app/version.sh@14 -- # tr -d '"' 00:05:05.679 20:28:20 version -- app/version.sh@19 -- # patch=0 00:05:05.679 20:28:20 version -- app/version.sh@20 -- # get_header_version suffix 00:05:05.679 20:28:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:05.679 20:28:20 version -- app/version.sh@14 -- # cut -f2 00:05:05.679 20:28:20 version -- app/version.sh@14 -- # tr -d '"' 00:05:05.679 20:28:20 version -- app/version.sh@20 -- # suffix=-pre 00:05:05.679 20:28:20 version -- app/version.sh@22 -- # version=25.1 00:05:05.679 20:28:20 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:05.679 20:28:20 version -- app/version.sh@28 -- # version=25.1rc0 00:05:05.680 20:28:20 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:05.680 20:28:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:05.680 20:28:20 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:05.680 20:28:20 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:05.680 00:05:05.680 real 0m0.186s 00:05:05.680 user 0m0.114s 00:05:05.680 sys 0m0.101s 00:05:05.680 ************************************ 00:05:05.680 END TEST version 00:05:05.680 ************************************ 00:05:05.680 20:28:20 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.680 20:28:20 version -- common/autotest_common.sh@10 -- # set +x 00:05:05.680 20:28:20 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:05.680 20:28:20 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:05.680 20:28:20 -- spdk/autotest.sh@194 -- # uname -s 00:05:05.680 20:28:20 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:05.680 20:28:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:05.680 20:28:20 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:05:05.680 20:28:20 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:05:05.680 20:28:20 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:05.680 20:28:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.680 20:28:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.680 20:28:20 -- common/autotest_common.sh@10 -- # set +x 00:05:05.680 ************************************ 00:05:05.680 START TEST spdk_dd 00:05:05.680 ************************************ 00:05:05.680 20:28:20 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:05.941 * Looking for test storage... 00:05:05.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:05.941 20:28:20 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:05.941 20:28:20 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:05:05.941 20:28:20 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:05.941 20:28:20 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@345 -- # : 1 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@368 -- # return 0 00:05:05.941 20:28:20 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.941 20:28:20 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:05.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.941 --rc genhtml_branch_coverage=1 00:05:05.941 --rc genhtml_function_coverage=1 00:05:05.941 --rc genhtml_legend=1 00:05:05.941 --rc geninfo_all_blocks=1 00:05:05.941 --rc geninfo_unexecuted_blocks=1 00:05:05.941 00:05:05.941 ' 00:05:05.941 20:28:20 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:05.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.941 --rc genhtml_branch_coverage=1 00:05:05.941 --rc genhtml_function_coverage=1 00:05:05.941 --rc genhtml_legend=1 00:05:05.941 --rc geninfo_all_blocks=1 00:05:05.941 --rc geninfo_unexecuted_blocks=1 00:05:05.941 00:05:05.941 ' 00:05:05.941 20:28:20 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:05.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.941 --rc genhtml_branch_coverage=1 00:05:05.941 --rc genhtml_function_coverage=1 00:05:05.941 --rc genhtml_legend=1 00:05:05.941 --rc geninfo_all_blocks=1 00:05:05.941 --rc geninfo_unexecuted_blocks=1 00:05:05.941 00:05:05.941 ' 00:05:05.941 20:28:20 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:05.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.941 --rc genhtml_branch_coverage=1 00:05:05.941 --rc genhtml_function_coverage=1 00:05:05.941 --rc genhtml_legend=1 00:05:05.941 --rc geninfo_all_blocks=1 00:05:05.941 --rc geninfo_unexecuted_blocks=1 00:05:05.941 00:05:05.941 ' 00:05:05.941 20:28:20 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:05.941 20:28:20 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:05.941 20:28:20 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.941 20:28:20 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.941 20:28:20 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.941 20:28:20 spdk_dd -- paths/export.sh@5 -- # export PATH 00:05:05.941 20:28:20 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.941 20:28:20 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:06.204 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:06.204 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:06.204 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:06.204 20:28:20 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:05:06.204 20:28:20 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@233 -- # local class 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@235 -- # local progif 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@236 -- # class=01 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:05:06.204 20:28:20 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:05:06.204 20:28:20 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:06.204 20:28:20 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@139 -- # local lib 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
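The check_liburing step traced here lists the DT_NEEDED entries of the spdk_dd binary via objdump and flags whether any of them is liburing; the long run of per-library comparisons that follows below is simply the iterations of that scan. A minimal standalone sketch using the same pipeline visible in the trace (illustrative only, not the actual dd/common.sh code; binary path taken from the trace above):

    liburing_in_use=0
    # Walk the dynamic dependencies of spdk_dd and look for liburing.
    while read -r _ lib _; do
        [[ $lib == liburing.so.* ]] && liburing_in_use=1
    done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)
    echo "liburing_in_use=$liburing_in_use"   # 1 in this run, as reported further down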
00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.204 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:05:06.205 20:28:20 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:05:06.206 * spdk_dd linked to liburing 00:05:06.206 20:28:20 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:06.206 20:28:20 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:05:06.206 20:28:20 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:05:06.206 20:28:20 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:05:06.206 20:28:20 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:05:06.206 20:28:20 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:05:06.206 20:28:20 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:05:06.206 20:28:20 spdk_dd -- dd/common.sh@153 -- # return 0 00:05:06.206 20:28:20 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:05:06.206 20:28:20 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:06.206 20:28:20 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:06.206 20:28:20 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.206 20:28:20 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:06.206 ************************************ 00:05:06.206 START TEST spdk_dd_basic_rw 00:05:06.206 ************************************ 00:05:06.206 20:28:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:06.468 * Looking for test storage... 00:05:06.468 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.468 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:06.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.469 --rc genhtml_branch_coverage=1 00:05:06.469 --rc genhtml_function_coverage=1 00:05:06.469 --rc genhtml_legend=1 00:05:06.469 --rc geninfo_all_blocks=1 00:05:06.469 --rc geninfo_unexecuted_blocks=1 00:05:06.469 00:05:06.469 ' 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:06.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.469 --rc genhtml_branch_coverage=1 00:05:06.469 --rc genhtml_function_coverage=1 00:05:06.469 --rc genhtml_legend=1 00:05:06.469 --rc geninfo_all_blocks=1 00:05:06.469 --rc geninfo_unexecuted_blocks=1 00:05:06.469 00:05:06.469 ' 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:06.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.469 --rc genhtml_branch_coverage=1 00:05:06.469 --rc genhtml_function_coverage=1 00:05:06.469 --rc genhtml_legend=1 00:05:06.469 --rc geninfo_all_blocks=1 00:05:06.469 --rc geninfo_unexecuted_blocks=1 00:05:06.469 00:05:06.469 ' 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:06.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.469 --rc genhtml_branch_coverage=1 00:05:06.469 --rc genhtml_function_coverage=1 00:05:06.469 --rc genhtml_legend=1 00:05:06.469 --rc geninfo_all_blocks=1 00:05:06.469 --rc geninfo_unexecuted_blocks=1 00:05:06.469 00:05:06.469 ' 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
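The basic_rw setup traced above attaches the controller at 0000:00:10.0 as bdev "Nvme0" and targets namespace Nvme0n1; gen_conf turns the method_bdev_nvme_attach_controller_0 array into the JSON configuration that is dumped later in this log and handed to spdk_dd over a /dev/fd descriptor. A hedged sketch of an equivalent standalone invocation (same flags and JSON as in the trace; the process substitution stands in for the /dev/fd plumbing):

    # Sketch only: attach the PCIe controller as bdev "Nvme0" and write a dump
    # file onto namespace Nvme0n1, mirroring the config dumped further down.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    CONF='{ "subsystems": [ { "subsystem": "bdev", "config": [
      { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
        "method": "bdev_nvme_attach_controller" },
      { "method": "bdev_wait_for_examine" } ] } ] }'
    "$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
               --ob=Nvme0n1 --bs=4096 --qd=1 --json <(printf '%s' "$CONF")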
00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:05:06.469 20:28:20 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:05:06.801 20:28:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:05:06.802 20:28:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:05:06.802 20:28:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:05:06.802 20:28:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:05:06.802 20:28:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:05:06.802 20:28:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:05:06.803 20:28:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:05:06.803 20:28:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:06.803 20:28:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:06.803 20:28:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:06.803 20:28:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:05:06.803 20:28:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:06.803 20:28:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.803 20:28:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:06.803 ************************************ 00:05:06.803 START TEST dd_bs_lt_native_bs 00:05:06.803 ************************************ 00:05:06.803 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:06.803 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:05:06.803 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:06.803 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:06.803 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.803 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:06.803 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.803 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:06.803 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.803 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:06.803 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:06.803 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:06.803 { 00:05:06.803 "subsystems": [ 00:05:06.803 { 00:05:06.803 "subsystem": "bdev", 00:05:06.803 "config": [ 00:05:06.803 { 00:05:06.803 "params": { 00:05:06.803 "trtype": "pcie", 00:05:06.803 "traddr": "0000:00:10.0", 00:05:06.803 "name": "Nvme0" 00:05:06.803 }, 00:05:06.803 "method": "bdev_nvme_attach_controller" 00:05:06.803 }, 00:05:06.803 { 00:05:06.803 "method": "bdev_wait_for_examine" 00:05:06.803 } 00:05:06.803 ] 00:05:06.803 } 00:05:06.803 ] 00:05:06.803 } 00:05:06.803 [2024-11-26 20:28:21.075534] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:05:06.803 [2024-11-26 20:28:21.075725] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59085 ] 00:05:06.803 [2024-11-26 20:28:21.214622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.803 [2024-11-26 20:28:21.251025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.803 [2024-11-26 20:28:21.282862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:07.063 [2024-11-26 20:28:21.379523] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:05:07.063 [2024-11-26 20:28:21.379579] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:07.063 [2024-11-26 20:28:21.448176] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:07.063 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:05:07.063 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:07.063 ************************************ 00:05:07.063 END TEST dd_bs_lt_native_bs 00:05:07.063 ************************************ 00:05:07.063 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:05:07.063 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:05:07.063 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:05:07.063 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:07.063 00:05:07.063 real 0m0.452s 00:05:07.063 user 0m0.287s 00:05:07.063 sys 0m0.098s 00:05:07.063 
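The dd_bs_lt_native_bs run above exercises one specific guard in spdk_dd: the trace first derives native_bs=4096 from the identify output ("LBA Format #04: Data Size: 4096", the current LBA format), then invokes spdk_dd with a deliberately smaller --bs=2048 under the NOT wrapper, and the expected failure ("--bs value cannot be less than input (1) neither output (4096) native block size") is what makes the test pass. The bash sketch below is illustrative only; the variable names are not taken from dd/basic_rw.sh, it merely assumes the behaviour visible in the trace:

  # Sketch of the guard exercised by dd_bs_lt_native_bs (illustrative, not SPDK source).
  native_bs=4096     # data size of the current LBA format reported by identify
  requested_bs=2048  # the undersized --bs passed to spdk_dd in the trace above
  if (( requested_bs < native_bs )); then
      # spdk_dd refuses the copy and exits non-zero; the NOT wrapper in the
      # trace treats that expected failure as a pass (the es=... handling above).
      echo "--bs=$requested_bs is smaller than the native block size ($native_bs)" >&2
      exit 1
  fi

The dd_rw test that follows sweeps the same device with block sizes native_bs<<0, <<1 and <<2 (4096, 8192 and 16384 bytes) at queue depths 1 and 64; with count=15 the 4096-byte pass moves 15 * 4096 = 61440 bytes, which is the size=61440 visible in its trace.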
20:28:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.063 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:05:07.063 20:28:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:05:07.063 20:28:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:07.063 20:28:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.063 20:28:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:07.063 ************************************ 00:05:07.063 START TEST dd_rw 00:05:07.063 ************************************ 00:05:07.063 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:05:07.064 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:05:07.064 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:05:07.064 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:05:07.064 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:05:07.064 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:07.064 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:07.064 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:07.064 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:07.064 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:07.064 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:07.064 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:07.064 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:07.064 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:07.064 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:07.064 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:07.064 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:07.064 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:07.064 20:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:07.634 20:28:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:05:07.634 20:28:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:07.634 20:28:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:07.634 20:28:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:07.634 { 00:05:07.634 "subsystems": [ 00:05:07.634 { 00:05:07.634 "subsystem": "bdev", 00:05:07.634 "config": [ 00:05:07.634 { 00:05:07.634 "params": { 00:05:07.634 "trtype": "pcie", 00:05:07.634 "traddr": "0000:00:10.0", 00:05:07.634 "name": "Nvme0" 00:05:07.634 }, 00:05:07.634 "method": "bdev_nvme_attach_controller" 00:05:07.634 }, 00:05:07.634 { 00:05:07.634 "method": "bdev_wait_for_examine" 00:05:07.634 } 00:05:07.634 ] 00:05:07.634 } 00:05:07.634 
] 00:05:07.634 } 00:05:07.634 [2024-11-26 20:28:22.071168] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:05:07.634 [2024-11-26 20:28:22.071426] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59116 ] 00:05:07.900 [2024-11-26 20:28:22.213574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.900 [2024-11-26 20:28:22.254859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.900 [2024-11-26 20:28:22.291896] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:07.900  [2024-11-26T20:28:22.715Z] Copying: 60/60 [kB] (average 14 MBps) 00:05:08.160 00:05:08.160 20:28:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:05:08.160 20:28:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:08.160 20:28:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:08.160 20:28:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:08.160 { 00:05:08.160 "subsystems": [ 00:05:08.160 { 00:05:08.160 "subsystem": "bdev", 00:05:08.160 "config": [ 00:05:08.160 { 00:05:08.160 "params": { 00:05:08.160 "trtype": "pcie", 00:05:08.160 "traddr": "0000:00:10.0", 00:05:08.160 "name": "Nvme0" 00:05:08.160 }, 00:05:08.160 "method": "bdev_nvme_attach_controller" 00:05:08.160 }, 00:05:08.160 { 00:05:08.160 "method": "bdev_wait_for_examine" 00:05:08.160 } 00:05:08.160 ] 00:05:08.160 } 00:05:08.160 ] 00:05:08.160 } 00:05:08.160 [2024-11-26 20:28:22.545154] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:08.161 [2024-11-26 20:28:22.545218] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59124 ] 00:05:08.161 [2024-11-26 20:28:22.681938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.422 [2024-11-26 20:28:22.722268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.422 [2024-11-26 20:28:22.755721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:08.422  [2024-11-26T20:28:22.977Z] Copying: 60/60 [kB] (average 7500 kBps) 00:05:08.422 00:05:08.684 20:28:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:08.684 20:28:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:08.684 20:28:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:08.684 20:28:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:08.684 20:28:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:08.684 20:28:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:08.684 20:28:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:08.684 20:28:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:08.684 20:28:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:08.684 20:28:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:08.684 20:28:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:08.684 [2024-11-26 20:28:23.015689] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:08.684 [2024-11-26 20:28:23.015871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59140 ] 00:05:08.684 { 00:05:08.684 "subsystems": [ 00:05:08.684 { 00:05:08.684 "subsystem": "bdev", 00:05:08.684 "config": [ 00:05:08.684 { 00:05:08.684 "params": { 00:05:08.684 "trtype": "pcie", 00:05:08.684 "traddr": "0000:00:10.0", 00:05:08.684 "name": "Nvme0" 00:05:08.684 }, 00:05:08.684 "method": "bdev_nvme_attach_controller" 00:05:08.684 }, 00:05:08.684 { 00:05:08.684 "method": "bdev_wait_for_examine" 00:05:08.684 } 00:05:08.684 ] 00:05:08.684 } 00:05:08.684 ] 00:05:08.684 } 00:05:08.684 [2024-11-26 20:28:23.156214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.684 [2024-11-26 20:28:23.199525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.945 [2024-11-26 20:28:23.239015] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:08.945  [2024-11-26T20:28:23.500Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:08.945 00:05:08.945 20:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:08.945 20:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:08.945 20:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:08.945 20:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:08.945 20:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:08.945 20:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:08.945 20:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:09.519 20:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:05:09.519 20:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:09.520 20:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:09.520 20:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:09.520 [2024-11-26 20:28:23.866097] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:09.520 [2024-11-26 20:28:23.866165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59153 ] 00:05:09.520 { 00:05:09.520 "subsystems": [ 00:05:09.520 { 00:05:09.520 "subsystem": "bdev", 00:05:09.520 "config": [ 00:05:09.520 { 00:05:09.520 "params": { 00:05:09.520 "trtype": "pcie", 00:05:09.520 "traddr": "0000:00:10.0", 00:05:09.520 "name": "Nvme0" 00:05:09.520 }, 00:05:09.520 "method": "bdev_nvme_attach_controller" 00:05:09.520 }, 00:05:09.520 { 00:05:09.520 "method": "bdev_wait_for_examine" 00:05:09.520 } 00:05:09.520 ] 00:05:09.520 } 00:05:09.520 ] 00:05:09.520 } 00:05:09.520 [2024-11-26 20:28:24.005964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.520 [2024-11-26 20:28:24.042543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.782 [2024-11-26 20:28:24.075427] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:09.782  [2024-11-26T20:28:24.337Z] Copying: 60/60 [kB] (average 29 MBps) 00:05:09.782 00:05:09.782 20:28:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:09.782 20:28:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:09.782 20:28:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:05:09.782 20:28:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:09.782 { 00:05:09.782 "subsystems": [ 00:05:09.782 { 00:05:09.782 "subsystem": "bdev", 00:05:09.782 "config": [ 00:05:09.782 { 00:05:09.782 "params": { 00:05:09.782 "trtype": "pcie", 00:05:09.782 "traddr": "0000:00:10.0", 00:05:09.782 "name": "Nvme0" 00:05:09.782 }, 00:05:09.782 "method": "bdev_nvme_attach_controller" 00:05:09.782 }, 00:05:09.782 { 00:05:09.782 "method": "bdev_wait_for_examine" 00:05:09.782 } 00:05:09.782 ] 00:05:09.782 } 00:05:09.782 ] 00:05:09.782 } 00:05:09.782 [2024-11-26 20:28:24.330347] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:09.782 [2024-11-26 20:28:24.330432] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59172 ] 00:05:10.043 [2024-11-26 20:28:24.480397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.043 [2024-11-26 20:28:24.519901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.043 [2024-11-26 20:28:24.554899] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:10.308  [2024-11-26T20:28:24.863Z] Copying: 60/60 [kB] (average 29 MBps) 00:05:10.308 00:05:10.308 20:28:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:10.308 20:28:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:10.308 20:28:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:10.308 20:28:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:10.308 20:28:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:10.308 20:28:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:10.308 20:28:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:10.308 20:28:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:10.308 20:28:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:10.308 20:28:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:10.308 20:28:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:10.308 { 00:05:10.308 "subsystems": [ 00:05:10.308 { 00:05:10.308 "subsystem": "bdev", 00:05:10.308 "config": [ 00:05:10.308 { 00:05:10.308 "params": { 00:05:10.308 "trtype": "pcie", 00:05:10.308 "traddr": "0000:00:10.0", 00:05:10.308 "name": "Nvme0" 00:05:10.308 }, 00:05:10.308 "method": "bdev_nvme_attach_controller" 00:05:10.308 }, 00:05:10.308 { 00:05:10.308 "method": "bdev_wait_for_examine" 00:05:10.308 } 00:05:10.308 ] 00:05:10.308 } 00:05:10.308 ] 00:05:10.308 } 00:05:10.308 [2024-11-26 20:28:24.830119] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:10.308 [2024-11-26 20:28:24.830210] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59182 ] 00:05:10.575 [2024-11-26 20:28:24.976413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.576 [2024-11-26 20:28:25.024240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.576 [2024-11-26 20:28:25.075163] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:10.837  [2024-11-26T20:28:25.392Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:10.837 00:05:10.837 20:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:10.837 20:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:10.837 20:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:10.837 20:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:10.837 20:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:10.837 20:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:10.837 20:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:10.837 20:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:11.409 20:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:05:11.409 20:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:11.409 20:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:11.409 20:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:11.409 [2024-11-26 20:28:25.830058] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:11.409 [2024-11-26 20:28:25.830124] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59201 ] 00:05:11.409 { 00:05:11.409 "subsystems": [ 00:05:11.409 { 00:05:11.409 "subsystem": "bdev", 00:05:11.409 "config": [ 00:05:11.409 { 00:05:11.409 "params": { 00:05:11.409 "trtype": "pcie", 00:05:11.409 "traddr": "0000:00:10.0", 00:05:11.409 "name": "Nvme0" 00:05:11.409 }, 00:05:11.409 "method": "bdev_nvme_attach_controller" 00:05:11.409 }, 00:05:11.409 { 00:05:11.409 "method": "bdev_wait_for_examine" 00:05:11.409 } 00:05:11.409 ] 00:05:11.409 } 00:05:11.409 ] 00:05:11.409 } 00:05:11.671 [2024-11-26 20:28:25.967688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.671 [2024-11-26 20:28:26.009906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.671 [2024-11-26 20:28:26.047295] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:11.671  [2024-11-26T20:28:26.487Z] Copying: 56/56 [kB] (average 27 MBps) 00:05:11.932 00:05:11.932 20:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:05:11.932 20:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:11.932 20:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:11.932 20:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:11.932 [2024-11-26 20:28:26.313854] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:11.932 [2024-11-26 20:28:26.313931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59220 ] 00:05:11.932 { 00:05:11.932 "subsystems": [ 00:05:11.932 { 00:05:11.932 "subsystem": "bdev", 00:05:11.932 "config": [ 00:05:11.932 { 00:05:11.932 "params": { 00:05:11.932 "trtype": "pcie", 00:05:11.932 "traddr": "0000:00:10.0", 00:05:11.932 "name": "Nvme0" 00:05:11.932 }, 00:05:11.932 "method": "bdev_nvme_attach_controller" 00:05:11.932 }, 00:05:11.932 { 00:05:11.932 "method": "bdev_wait_for_examine" 00:05:11.932 } 00:05:11.932 ] 00:05:11.932 } 00:05:11.932 ] 00:05:11.932 } 00:05:11.932 [2024-11-26 20:28:26.455117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.193 [2024-11-26 20:28:26.501776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.193 [2024-11-26 20:28:26.545868] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:12.193  [2024-11-26T20:28:27.010Z] Copying: 56/56 [kB] (average 10 MBps) 00:05:12.455 00:05:12.455 20:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:12.455 20:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:12.455 20:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:12.455 20:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:12.455 20:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:12.455 20:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:12.455 20:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:12.455 20:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:12.455 20:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:12.455 20:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:12.455 20:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:12.455 [2024-11-26 20:28:26.833382] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:12.455 [2024-11-26 20:28:26.833454] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59230 ] 00:05:12.455 { 00:05:12.455 "subsystems": [ 00:05:12.455 { 00:05:12.455 "subsystem": "bdev", 00:05:12.455 "config": [ 00:05:12.455 { 00:05:12.455 "params": { 00:05:12.455 "trtype": "pcie", 00:05:12.455 "traddr": "0000:00:10.0", 00:05:12.455 "name": "Nvme0" 00:05:12.455 }, 00:05:12.455 "method": "bdev_nvme_attach_controller" 00:05:12.455 }, 00:05:12.455 { 00:05:12.455 "method": "bdev_wait_for_examine" 00:05:12.455 } 00:05:12.455 ] 00:05:12.455 } 00:05:12.455 ] 00:05:12.455 } 00:05:12.455 [2024-11-26 20:28:26.974566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.717 [2024-11-26 20:28:27.021713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.717 [2024-11-26 20:28:27.068975] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:12.717  [2024-11-26T20:28:27.533Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:12.978 00:05:12.978 20:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:12.978 20:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:12.978 20:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:12.978 20:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:12.978 20:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:12.978 20:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:12.978 20:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:13.238 20:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:05:13.238 20:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:13.238 20:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:13.238 20:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:13.500 [2024-11-26 20:28:27.799189] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:13.500 [2024-11-26 20:28:27.799259] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59249 ] 00:05:13.500 { 00:05:13.500 "subsystems": [ 00:05:13.500 { 00:05:13.500 "subsystem": "bdev", 00:05:13.500 "config": [ 00:05:13.500 { 00:05:13.500 "params": { 00:05:13.500 "trtype": "pcie", 00:05:13.500 "traddr": "0000:00:10.0", 00:05:13.500 "name": "Nvme0" 00:05:13.500 }, 00:05:13.500 "method": "bdev_nvme_attach_controller" 00:05:13.500 }, 00:05:13.500 { 00:05:13.500 "method": "bdev_wait_for_examine" 00:05:13.500 } 00:05:13.500 ] 00:05:13.500 } 00:05:13.500 ] 00:05:13.500 } 00:05:13.500 [2024-11-26 20:28:27.938510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.500 [2024-11-26 20:28:27.975287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.500 [2024-11-26 20:28:28.007696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:13.761  [2024-11-26T20:28:28.317Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:13.762 00:05:13.762 20:28:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:05:13.762 20:28:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:13.762 20:28:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:13.762 20:28:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:13.762 { 00:05:13.762 "subsystems": [ 00:05:13.762 { 00:05:13.762 "subsystem": "bdev", 00:05:13.762 "config": [ 00:05:13.762 { 00:05:13.762 "params": { 00:05:13.762 "trtype": "pcie", 00:05:13.762 "traddr": "0000:00:10.0", 00:05:13.762 "name": "Nvme0" 00:05:13.762 }, 00:05:13.762 "method": "bdev_nvme_attach_controller" 00:05:13.762 }, 00:05:13.762 { 00:05:13.762 "method": "bdev_wait_for_examine" 00:05:13.762 } 00:05:13.762 ] 00:05:13.762 } 00:05:13.762 ] 00:05:13.762 } 00:05:13.762 [2024-11-26 20:28:28.273465] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:13.762 [2024-11-26 20:28:28.273578] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59267 ] 00:05:14.023 [2024-11-26 20:28:28.420622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.023 [2024-11-26 20:28:28.460550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.023 [2024-11-26 20:28:28.497825] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:14.285  [2024-11-26T20:28:28.840Z] Copying: 56/56 [kB] (average 27 MBps) 00:05:14.285 00:05:14.285 20:28:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:14.285 20:28:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:14.285 20:28:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:14.285 20:28:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:14.285 20:28:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:14.285 20:28:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:14.285 20:28:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:14.285 20:28:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:14.285 20:28:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:14.285 20:28:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:14.285 20:28:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:14.285 [2024-11-26 20:28:28.785962] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:14.285 [2024-11-26 20:28:28.786047] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59278 ] 00:05:14.285 { 00:05:14.285 "subsystems": [ 00:05:14.285 { 00:05:14.285 "subsystem": "bdev", 00:05:14.285 "config": [ 00:05:14.285 { 00:05:14.285 "params": { 00:05:14.285 "trtype": "pcie", 00:05:14.285 "traddr": "0000:00:10.0", 00:05:14.285 "name": "Nvme0" 00:05:14.285 }, 00:05:14.285 "method": "bdev_nvme_attach_controller" 00:05:14.285 }, 00:05:14.285 { 00:05:14.285 "method": "bdev_wait_for_examine" 00:05:14.285 } 00:05:14.285 ] 00:05:14.285 } 00:05:14.285 ] 00:05:14.285 } 00:05:14.547 [2024-11-26 20:28:28.921368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.547 [2024-11-26 20:28:28.969683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.547 [2024-11-26 20:28:29.014035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:14.808  [2024-11-26T20:28:29.363Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:14.808 00:05:14.808 20:28:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:14.808 20:28:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:14.808 20:28:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:14.808 20:28:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:14.808 20:28:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:14.808 20:28:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:14.808 20:28:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:14.808 20:28:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:15.379 20:28:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:05:15.379 20:28:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:15.379 20:28:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:15.379 20:28:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:15.379 [2024-11-26 20:28:29.679710] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:15.379 [2024-11-26 20:28:29.679790] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59297 ] 00:05:15.379 { 00:05:15.379 "subsystems": [ 00:05:15.379 { 00:05:15.379 "subsystem": "bdev", 00:05:15.379 "config": [ 00:05:15.379 { 00:05:15.379 "params": { 00:05:15.379 "trtype": "pcie", 00:05:15.379 "traddr": "0000:00:10.0", 00:05:15.379 "name": "Nvme0" 00:05:15.379 }, 00:05:15.379 "method": "bdev_nvme_attach_controller" 00:05:15.379 }, 00:05:15.379 { 00:05:15.379 "method": "bdev_wait_for_examine" 00:05:15.379 } 00:05:15.379 ] 00:05:15.379 } 00:05:15.379 ] 00:05:15.379 } 00:05:15.379 [2024-11-26 20:28:29.819515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.379 [2024-11-26 20:28:29.869118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.379 [2024-11-26 20:28:29.911450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:15.639  [2024-11-26T20:28:30.194Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:15.639 00:05:15.639 20:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:05:15.639 20:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:15.639 20:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:15.639 20:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:15.898 [2024-11-26 20:28:30.212081] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:15.898 [2024-11-26 20:28:30.212170] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59311 ] 00:05:15.898 { 00:05:15.898 "subsystems": [ 00:05:15.898 { 00:05:15.898 "subsystem": "bdev", 00:05:15.898 "config": [ 00:05:15.898 { 00:05:15.898 "params": { 00:05:15.898 "trtype": "pcie", 00:05:15.898 "traddr": "0000:00:10.0", 00:05:15.898 "name": "Nvme0" 00:05:15.898 }, 00:05:15.898 "method": "bdev_nvme_attach_controller" 00:05:15.898 }, 00:05:15.898 { 00:05:15.898 "method": "bdev_wait_for_examine" 00:05:15.898 } 00:05:15.898 ] 00:05:15.898 } 00:05:15.898 ] 00:05:15.898 } 00:05:15.898 [2024-11-26 20:28:30.351541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.898 [2024-11-26 20:28:30.405496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.158 [2024-11-26 20:28:30.457125] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:16.158  [2024-11-26T20:28:30.973Z] Copying: 48/48 [kB] (average 15 MBps) 00:05:16.418 00:05:16.418 20:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:16.418 20:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:16.418 20:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:16.418 20:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:16.418 20:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:16.418 20:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:16.418 20:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:16.418 20:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:16.418 20:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:16.418 20:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:16.419 20:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:16.419 { 00:05:16.419 "subsystems": [ 00:05:16.419 { 00:05:16.419 "subsystem": "bdev", 00:05:16.419 "config": [ 00:05:16.419 { 00:05:16.419 "params": { 00:05:16.419 "trtype": "pcie", 00:05:16.419 "traddr": "0000:00:10.0", 00:05:16.419 "name": "Nvme0" 00:05:16.419 }, 00:05:16.419 "method": "bdev_nvme_attach_controller" 00:05:16.419 }, 00:05:16.419 { 00:05:16.419 "method": "bdev_wait_for_examine" 00:05:16.419 } 00:05:16.419 ] 00:05:16.419 } 00:05:16.419 ] 00:05:16.419 } 00:05:16.419 [2024-11-26 20:28:30.800368] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:16.419 [2024-11-26 20:28:30.800457] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59326 ] 00:05:16.419 [2024-11-26 20:28:30.939959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.679 [2024-11-26 20:28:30.992981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.679 [2024-11-26 20:28:31.050098] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:16.679  [2024-11-26T20:28:31.497Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:16.942 00:05:16.942 20:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:16.942 20:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:16.942 20:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:16.942 20:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:16.942 20:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:16.942 20:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:16.942 20:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:17.515 20:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:05:17.515 20:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:17.515 20:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:17.515 20:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:17.515 { 00:05:17.515 "subsystems": [ 00:05:17.515 { 00:05:17.515 "subsystem": "bdev", 00:05:17.515 "config": [ 00:05:17.515 { 00:05:17.515 "params": { 00:05:17.515 "trtype": "pcie", 00:05:17.515 "traddr": "0000:00:10.0", 00:05:17.515 "name": "Nvme0" 00:05:17.515 }, 00:05:17.515 "method": "bdev_nvme_attach_controller" 00:05:17.515 }, 00:05:17.515 { 00:05:17.515 "method": "bdev_wait_for_examine" 00:05:17.515 } 00:05:17.515 ] 00:05:17.515 } 00:05:17.515 ] 00:05:17.515 } 00:05:17.515 [2024-11-26 20:28:31.815751] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:17.515 [2024-11-26 20:28:31.815840] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59345 ] 00:05:17.515 [2024-11-26 20:28:31.956118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.515 [2024-11-26 20:28:32.024174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.777 [2024-11-26 20:28:32.098047] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:17.777  [2024-11-26T20:28:32.594Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:18.039 00:05:18.039 20:28:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:05:18.039 20:28:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:18.039 20:28:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:18.039 20:28:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:18.039 { 00:05:18.039 "subsystems": [ 00:05:18.039 { 00:05:18.039 "subsystem": "bdev", 00:05:18.039 "config": [ 00:05:18.039 { 00:05:18.039 "params": { 00:05:18.039 "trtype": "pcie", 00:05:18.039 "traddr": "0000:00:10.0", 00:05:18.039 "name": "Nvme0" 00:05:18.039 }, 00:05:18.039 "method": "bdev_nvme_attach_controller" 00:05:18.039 }, 00:05:18.039 { 00:05:18.039 "method": "bdev_wait_for_examine" 00:05:18.039 } 00:05:18.039 ] 00:05:18.039 } 00:05:18.039 ] 00:05:18.039 } 00:05:18.039 [2024-11-26 20:28:32.459882] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:18.039 [2024-11-26 20:28:32.459951] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59363 ] 00:05:18.300 [2024-11-26 20:28:32.602573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.300 [2024-11-26 20:28:32.660709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.300 [2024-11-26 20:28:32.720572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:18.300  [2024-11-26T20:28:33.118Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:18.563 00:05:18.563 20:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:18.563 20:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:18.563 20:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:18.563 20:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:18.563 20:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:18.563 20:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:18.563 20:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:18.563 20:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:18.563 20:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:18.563 20:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:18.563 20:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:18.563 [2024-11-26 20:28:33.079330] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:18.563 [2024-11-26 20:28:33.079424] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59374 ] 00:05:18.563 { 00:05:18.563 "subsystems": [ 00:05:18.563 { 00:05:18.563 "subsystem": "bdev", 00:05:18.563 "config": [ 00:05:18.563 { 00:05:18.563 "params": { 00:05:18.563 "trtype": "pcie", 00:05:18.563 "traddr": "0000:00:10.0", 00:05:18.563 "name": "Nvme0" 00:05:18.563 }, 00:05:18.563 "method": "bdev_nvme_attach_controller" 00:05:18.563 }, 00:05:18.563 { 00:05:18.563 "method": "bdev_wait_for_examine" 00:05:18.563 } 00:05:18.563 ] 00:05:18.563 } 00:05:18.563 ] 00:05:18.563 } 00:05:18.825 [2024-11-26 20:28:33.217297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.825 [2024-11-26 20:28:33.277895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.825 [2024-11-26 20:28:33.338205] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:19.193  [2024-11-26T20:28:33.748Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:19.193 00:05:19.193 ************************************ 00:05:19.193 END TEST dd_rw 00:05:19.193 ************************************ 00:05:19.193 00:05:19.193 real 0m12.096s 00:05:19.193 user 0m8.465s 00:05:19.193 sys 0m4.298s 00:05:19.193 20:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.193 20:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:19.455 20:28:33 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:05:19.455 20:28:33 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.455 20:28:33 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.455 20:28:33 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:19.455 ************************************ 00:05:19.455 START TEST dd_rw_offset 00:05:19.455 ************************************ 00:05:19.455 20:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:05:19.455 20:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:05:19.455 20:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:05:19.455 20:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:05:19.455 20:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:19.455 20:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:05:19.455 20:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=b25u2icku8kufmy3nc19861ed30rwd5t3nbc79aym7hunufbrcrl3h1aunxf8rtf2tiknhuyvwtkb05rqpykj9782fuzzico3zlkpw6bk5baiwiuzo3v7zb5sqcbx4aao33c5qe80rtegca0huof8huesnt7v1qag3w4aqcb1bq78ugg73dew2rtb3lma6b385e7e3qbcysq6w41g4rtcoxu2tssk0pb3xbreczykdy3uw0istkc14e7h2xnrkyyn6f8dmrtp8y398dczcgpx4op617q1chbk7pf7gir1k450caa7tbfuf4tabk1cu79b9ub62jsg1utslr0ji0qatkpj8ukcdhhm0yxg3e28b1jzvhbx0hnbuux2b7v37itudbikko55s1fa3is0mamc2ei0pz2bbgj7dp1gg2v8ec7htaibue4a6t2y0m9bxuh22tu394ypcps90ealrj7gep885inr5zfpqvux5freuwgyerdub0vg9yb99q446gma9sikl2rnmja6mfdgvr4y14oandsn20loleklfir13t72164e3tfj0xf2oevldbicuorsjtb34iqdbst9u54fkkmxkf6262d1fjueawnblc7syptjvl1prq1xehl72xo37eunfw5x2oj70mym2qgxnzjktqsiikb4ztlelgdonojeou59wd8bgdd7uvxxtkbt6gaeemim78y91z571n9djsphc9qwevx7ojwbpdx3x22inffuz7srenuow3yfa74scrtyeoiht1iyum9g3ir2c3hqmur20n4p3v3q0i2r5bo56g77tm18tr7rmmz5plh957utash7g42y5a71w54xfjjnffpi94tgg0ox1ox15dd7wxifi204x749xlvkhj9fplt16x4ndhdtb7172kbqtdunaa8qutj3p53da36rhsdvmp1gnm8axfggg45icleq4y47qyq1lt917xgtrdhmfq750vf656k0t5zkv9j9vb5nylnjcg0gujnbze1x18e0zrtkjix7w7nymupvqurce0uwyqo2bh0h888u35casjsqmfy4sx60y3b4vqt7wx59j8gtjk1kme0xny7bks6uhagpn2kescbctl5wy6pz2h4t3i1avn3658sl4d6sgc9nnznzzyapdmcebvgefw13qk3rrb8rp2mmhqh7w6fb73gion8uul77cda2ykiliy4u0fego1w4jycnkw4ugcx9g2f6szh9cnx2mb0j1bwc7iqalmhkhx6s0zyeuc63w3rulkhwyx952tmh2saum54mcdu1u8klvkki3yl3y3nh4w6omm9fimvshpqzjscoq8p9ra2tngk7hulg7vcxuo3zv8fdhi8anf775gfysexgexu8a1p3dzf3z7wolfaqvfh5gyq6s290hjlt7895fvr72n6jmrfkbowd8yzst9oc62gu7lnqorj3ndhw2ighmvt94izshka2ajw28hivv8urwlsvm3jbc4y4axl4m8jkbfspddixhn8fg3iup5k7ia3mez4hyp4h1du6kknlmqhmntn5g2402m86wyttjzjy1gchiaa598qut99vsamsdyxhm0osqyhkmv6qcfuiz1jeg50xgkwrd9nsz5i86en8vwmhpkjaxoc5yuj1qtqht54sdfsd3h9vmqpsswrua4n34fsgu7wdmsqwm5o4nlg4lk8ttqjuh6mql7k865jdbxeceio0zhs8uwugic4x5grrv8lhg0f7bdx6chpfjlqp52z2dz6sp30573bibi7cy0sscqifb2ed8k1zm212cygvec5cla1x58w5y7nvhbzszxmclsiidl7brontrtzfimq4sxfqru2obna8erytren6urh27u3e6bftqjyhk8u22qymt80if9r2lg7zhetzhrg64xlyck2y5qrh16wptbtq1gw7y2ga2uca15wrwnj3meal1ntf0o4sqee1upn97kz052ngbva8zwimo9wh5ttwtbfwp5kbl4h853shcasrr1i6npu8omyeliof2e9j7fpct5nxhknakoldmog1rt7ngwwremji2sj8xm9tw4z4ehmz5aaxfkh1ysr4t788vhuisgdw87q887h2nru4wieftteok1u7rdqg17dn29hf5tk786p1lyl5jddngs9047ozw4nbov3ig3i8y160q36dxsc6kjwn75vka7oms34ln8rq055hmglh7lvrwj6idmbdh7yplsty9m2rbu388xjy7l0l5ibuo3viyruzvrbzswyioxu4l8yu3gr3awq5a70pu601yj5100922zcahthc8zppvy5lb0cma3fwsjn07rke4pi8tx6ilb3rzhs5kbcujmwpkw88al365q7f8407f0aggvmy5dbkhroft5l2p1e6qfc9b0yphl09fm5mb734bq3wp51ex0ycspit2vh9uazgg3eniyz7j74wayex2byikb6mapxwvm1q8g33eidjappxunawkwj1vhht913c6eh43p45dahfnnkevcljcv9zni4zya6ttyntm2k2tr3vhin1onpmj4fuxqkjh6mwib2ao1uujlop7h34nlf4v81dpl4iy0si8h0og7b6sxvh84xrv285az53fo61d6c0v1869qw82ioynuih68tqaxjtalx5eim2jtiohc94j3lutmosmwrqwalksuh0ypfzyjj9eknle7ibvazlju2qfs2wiibk4zwl8o4muksvcd878ee8xi4vlpkfqsell4a8w59pe72bfjkpn853mjetz1dmdikbdgagn1psmtawvoe1ns15qzbil5eqc7e4mjpfckbsetud807vwg8pmsmc07dc9nc26bx7v52j8nqy5qs7ex1k3xn75pr7txm11q9p7v791ec8quho0nqrer5bz0e3id26v03ygl508zk1x35bvo0r7n56vkq23d3vs6h7yq8oe3ygmxmj9tbazkz68iuf0d5b45b55hh6ud6f4rvwxrnxmg1budd2oljlxlzbr10csz86zmhjceht03cktmteof2ywohtkt3zijanfyh5c55unml484cxiv261d73a041e1w9na2g5yw6gou6vf8cfq7mz188b5zpwtecnm3a0003pa3xz4b76zz1sd6ls3re3dyn3vapyfwdkv1e5g2cowmgslo74vsdn19tijfyfzfg2qhftgr05eypmgcqy0dryld7stdq1macua8k8bz752o58nlnd4zr693k45myjpmmmwf4vdhczwx1u3mtrd75zfy5e5pmske3nfm8n52w9ggtjsiwafgrsemet7poq3rj4mu0hmnytbkdgffctm89ovtvzyi9o8gleg87q536fz5snxjpqtgdh5fatkmfhc6abprf8kbzozad914e0qvzqeyi2upas2s5r7actanqjhnj8yftr3mr1302wrtpdirr8c49etrqr2auxsf3l8lpsbxwc8jvdq3lgf32oonh52grepykh3kov3mo3yuwbe00bt1qos7gr1gdlg6vqq2is7tbr
ik6zd7m421fwmk7bwp9kj9vwpb5644pmjwfh0heow7jrlw0f2ng52vtiiymwc0glzrlge15t5881dpw7u7484vp5jfqjli9oon13kd401e3tanzvbuwt0sryw6xuxzrfg5r9032twakg2hbtahwmwsi89drrjeb6omyvo9b3qyti1la4ro1qb5vqksw4085e4soky4o07r3fut3jgktotx6p8skwzbv6fxjk6bftyxw8y7rv0mvdv1h301alj717w4ly8chp9ffe2m3l2yu8yd5io9zb4ltwdac2jsxpe9olpbfx2bknkt6h4lcacggykxsio38w9c0nul7zf2hhhtdi4s8wo4y91yns85ux0n7b533usbnrcwxk21l35lp2t8qqvh7uperozbxyimr9r7x0jvh93txcxulp46ch4831bew6gveu6ai30evlfrmqndp5h4gpqim8vsa3q3a244v5mo0vvg5202y6myk0ei69jra1jsv0d0e8ea1pelg2h2nb3p8n96etvrco19w0571uyeydr3bx5u 00:05:19.455 20:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:05:19.455 20:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:05:19.455 20:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:19.455 20:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:19.455 [2024-11-26 20:28:33.789543] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:05:19.455 [2024-11-26 20:28:33.789676] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59410 ] 00:05:19.455 { 00:05:19.455 "subsystems": [ 00:05:19.455 { 00:05:19.455 "subsystem": "bdev", 00:05:19.455 "config": [ 00:05:19.455 { 00:05:19.455 "params": { 00:05:19.455 "trtype": "pcie", 00:05:19.455 "traddr": "0000:00:10.0", 00:05:19.455 "name": "Nvme0" 00:05:19.455 }, 00:05:19.455 "method": "bdev_nvme_attach_controller" 00:05:19.455 }, 00:05:19.455 { 00:05:19.455 "method": "bdev_wait_for_examine" 00:05:19.455 } 00:05:19.455 ] 00:05:19.455 } 00:05:19.455 ] 00:05:19.455 } 00:05:19.455 [2024-11-26 20:28:33.930768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.455 [2024-11-26 20:28:33.989121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.715 [2024-11-26 20:28:34.050333] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:19.715  [2024-11-26T20:28:34.530Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:05:19.975 00:05:19.975 20:28:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:05:19.975 20:28:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:05:19.975 20:28:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:19.975 20:28:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:19.975 [2024-11-26 20:28:34.389621] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:19.975 [2024-11-26 20:28:34.389701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59424 ] 00:05:19.975 { 00:05:19.975 "subsystems": [ 00:05:19.975 { 00:05:19.975 "subsystem": "bdev", 00:05:19.975 "config": [ 00:05:19.975 { 00:05:19.975 "params": { 00:05:19.975 "trtype": "pcie", 00:05:19.975 "traddr": "0000:00:10.0", 00:05:19.975 "name": "Nvme0" 00:05:19.975 }, 00:05:19.975 "method": "bdev_nvme_attach_controller" 00:05:19.975 }, 00:05:19.975 { 00:05:19.975 "method": "bdev_wait_for_examine" 00:05:19.975 } 00:05:19.975 ] 00:05:19.975 } 00:05:19.975 ] 00:05:19.975 } 00:05:20.237 [2024-11-26 20:28:34.528576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.237 [2024-11-26 20:28:34.589850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.237 [2024-11-26 20:28:34.651845] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:20.237  [2024-11-26T20:28:35.054Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:05:20.499 00:05:20.499 20:28:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:05:20.500 20:28:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ b25u2icku8kufmy3nc19861ed30rwd5t3nbc79aym7hunufbrcrl3h1aunxf8rtf2tiknhuyvwtkb05rqpykj9782fuzzico3zlkpw6bk5baiwiuzo3v7zb5sqcbx4aao33c5qe80rtegca0huof8huesnt7v1qag3w4aqcb1bq78ugg73dew2rtb3lma6b385e7e3qbcysq6w41g4rtcoxu2tssk0pb3xbreczykdy3uw0istkc14e7h2xnrkyyn6f8dmrtp8y398dczcgpx4op617q1chbk7pf7gir1k450caa7tbfuf4tabk1cu79b9ub62jsg1utslr0ji0qatkpj8ukcdhhm0yxg3e28b1jzvhbx0hnbuux2b7v37itudbikko55s1fa3is0mamc2ei0pz2bbgj7dp1gg2v8ec7htaibue4a6t2y0m9bxuh22tu394ypcps90ealrj7gep885inr5zfpqvux5freuwgyerdub0vg9yb99q446gma9sikl2rnmja6mfdgvr4y14oandsn20loleklfir13t72164e3tfj0xf2oevldbicuorsjtb34iqdbst9u54fkkmxkf6262d1fjueawnblc7syptjvl1prq1xehl72xo37eunfw5x2oj70mym2qgxnzjktqsiikb4ztlelgdonojeou59wd8bgdd7uvxxtkbt6gaeemim78y91z571n9djsphc9qwevx7ojwbpdx3x22inffuz7srenuow3yfa74scrtyeoiht1iyum9g3ir2c3hqmur20n4p3v3q0i2r5bo56g77tm18tr7rmmz5plh957utash7g42y5a71w54xfjjnffpi94tgg0ox1ox15dd7wxifi204x749xlvkhj9fplt16x4ndhdtb7172kbqtdunaa8qutj3p53da36rhsdvmp1gnm8axfggg45icleq4y47qyq1lt917xgtrdhmfq750vf656k0t5zkv9j9vb5nylnjcg0gujnbze1x18e0zrtkjix7w7nymupvqurce0uwyqo2bh0h888u35casjsqmfy4sx60y3b4vqt7wx59j8gtjk1kme0xny7bks6uhagpn2kescbctl5wy6pz2h4t3i1avn3658sl4d6sgc9nnznzzyapdmcebvgefw13qk3rrb8rp2mmhqh7w6fb73gion8uul77cda2ykiliy4u0fego1w4jycnkw4ugcx9g2f6szh9cnx2mb0j1bwc7iqalmhkhx6s0zyeuc63w3rulkhwyx952tmh2saum54mcdu1u8klvkki3yl3y3nh4w6omm9fimvshpqzjscoq8p9ra2tngk7hulg7vcxuo3zv8fdhi8anf775gfysexgexu8a1p3dzf3z7wolfaqvfh5gyq6s290hjlt7895fvr72n6jmrfkbowd8yzst9oc62gu7lnqorj3ndhw2ighmvt94izshka2ajw28hivv8urwlsvm3jbc4y4axl4m8jkbfspddixhn8fg3iup5k7ia3mez4hyp4h1du6kknlmqhmntn5g2402m86wyttjzjy1gchiaa598qut99vsamsdyxhm0osqyhkmv6qcfuiz1jeg50xgkwrd9nsz5i86en8vwmhpkjaxoc5yuj1qtqht54sdfsd3h9vmqpsswrua4n34fsgu7wdmsqwm5o4nlg4lk8ttqjuh6mql7k865jdbxeceio0zhs8uwugic4x5grrv8lhg0f7bdx6chpfjlqp52z2dz6sp30573bibi7cy0sscqifb2ed8k1zm212cygvec5cla1x58w5y7nvhbzszxmclsiidl7brontrtzfimq4sxfqru2obna8erytren6urh27u3e6bftqjyhk8u22qymt80if9r2lg7zhetzhrg64xlyck2y5qrh16wptbtq1gw7y2ga2uca15wrwnj3meal1ntf0o4sqee1upn97kz052ngbva8zwimo9wh5ttwtbfwp5kbl4h853shcasrr1i6npu8omyeliof2e9j7fpct5nxhknakoldmog1rt7ngwwremji2sj8xm9tw4z4ehmz5aaxfkh1ysr4t788vhuisgdw87q887h2nru4
wieftteok1u7rdqg17dn29hf5tk786p1lyl5jddngs9047ozw4nbov3ig3i8y160q36dxsc6kjwn75vka7oms34ln8rq055hmglh7lvrwj6idmbdh7yplsty9m2rbu388xjy7l0l5ibuo3viyruzvrbzswyioxu4l8yu3gr3awq5a70pu601yj5100922zcahthc8zppvy5lb0cma3fwsjn07rke4pi8tx6ilb3rzhs5kbcujmwpkw88al365q7f8407f0aggvmy5dbkhroft5l2p1e6qfc9b0yphl09fm5mb734bq3wp51ex0ycspit2vh9uazgg3eniyz7j74wayex2byikb6mapxwvm1q8g33eidjappxunawkwj1vhht913c6eh43p45dahfnnkevcljcv9zni4zya6ttyntm2k2tr3vhin1onpmj4fuxqkjh6mwib2ao1uujlop7h34nlf4v81dpl4iy0si8h0og7b6sxvh84xrv285az53fo61d6c0v1869qw82ioynuih68tqaxjtalx5eim2jtiohc94j3lutmosmwrqwalksuh0ypfzyjj9eknle7ibvazlju2qfs2wiibk4zwl8o4muksvcd878ee8xi4vlpkfqsell4a8w59pe72bfjkpn853mjetz1dmdikbdgagn1psmtawvoe1ns15qzbil5eqc7e4mjpfckbsetud807vwg8pmsmc07dc9nc26bx7v52j8nqy5qs7ex1k3xn75pr7txm11q9p7v791ec8quho0nqrer5bz0e3id26v03ygl508zk1x35bvo0r7n56vkq23d3vs6h7yq8oe3ygmxmj9tbazkz68iuf0d5b45b55hh6ud6f4rvwxrnxmg1budd2oljlxlzbr10csz86zmhjceht03cktmteof2ywohtkt3zijanfyh5c55unml484cxiv261d73a041e1w9na2g5yw6gou6vf8cfq7mz188b5zpwtecnm3a0003pa3xz4b76zz1sd6ls3re3dyn3vapyfwdkv1e5g2cowmgslo74vsdn19tijfyfzfg2qhftgr05eypmgcqy0dryld7stdq1macua8k8bz752o58nlnd4zr693k45myjpmmmwf4vdhczwx1u3mtrd75zfy5e5pmske3nfm8n52w9ggtjsiwafgrsemet7poq3rj4mu0hmnytbkdgffctm89ovtvzyi9o8gleg87q536fz5snxjpqtgdh5fatkmfhc6abprf8kbzozad914e0qvzqeyi2upas2s5r7actanqjhnj8yftr3mr1302wrtpdirr8c49etrqr2auxsf3l8lpsbxwc8jvdq3lgf32oonh52grepykh3kov3mo3yuwbe00bt1qos7gr1gdlg6vqq2is7tbrik6zd7m421fwmk7bwp9kj9vwpb5644pmjwfh0heow7jrlw0f2ng52vtiiymwc0glzrlge15t5881dpw7u7484vp5jfqjli9oon13kd401e3tanzvbuwt0sryw6xuxzrfg5r9032twakg2hbtahwmwsi89drrjeb6omyvo9b3qyti1la4ro1qb5vqksw4085e4soky4o07r3fut3jgktotx6p8skwzbv6fxjk6bftyxw8y7rv0mvdv1h301alj717w4ly8chp9ffe2m3l2yu8yd5io9zb4ltwdac2jsxpe9olpbfx2bknkt6h4lcacggykxsio38w9c0nul7zf2hhhtdi4s8wo4y91yns85ux0n7b533usbnrcwxk21l35lp2t8qqvh7uperozbxyimr9r7x0jvh93txcxulp46ch4831bew6gveu6ai30evlfrmqndp5h4gpqim8vsa3q3a244v5mo0vvg5202y6myk0ei69jra1jsv0d0e8ea1pelg2h2nb3p8n96etvrco19w0571uyeydr3bx5u == 
\b\2\5\u\2\i\c\k\u\8\k\u\f\m\y\3\n\c\1\9\8\6\1\e\d\3\0\r\w\d\5\t\3\n\b\c\7\9\a\y\m\7\h\u\n\u\f\b\r\c\r\l\3\h\1\a\u\n\x\f\8\r\t\f\2\t\i\k\n\h\u\y\v\w\t\k\b\0\5\r\q\p\y\k\j\9\7\8\2\f\u\z\z\i\c\o\3\z\l\k\p\w\6\b\k\5\b\a\i\w\i\u\z\o\3\v\7\z\b\5\s\q\c\b\x\4\a\a\o\3\3\c\5\q\e\8\0\r\t\e\g\c\a\0\h\u\o\f\8\h\u\e\s\n\t\7\v\1\q\a\g\3\w\4\a\q\c\b\1\b\q\7\8\u\g\g\7\3\d\e\w\2\r\t\b\3\l\m\a\6\b\3\8\5\e\7\e\3\q\b\c\y\s\q\6\w\4\1\g\4\r\t\c\o\x\u\2\t\s\s\k\0\p\b\3\x\b\r\e\c\z\y\k\d\y\3\u\w\0\i\s\t\k\c\1\4\e\7\h\2\x\n\r\k\y\y\n\6\f\8\d\m\r\t\p\8\y\3\9\8\d\c\z\c\g\p\x\4\o\p\6\1\7\q\1\c\h\b\k\7\p\f\7\g\i\r\1\k\4\5\0\c\a\a\7\t\b\f\u\f\4\t\a\b\k\1\c\u\7\9\b\9\u\b\6\2\j\s\g\1\u\t\s\l\r\0\j\i\0\q\a\t\k\p\j\8\u\k\c\d\h\h\m\0\y\x\g\3\e\2\8\b\1\j\z\v\h\b\x\0\h\n\b\u\u\x\2\b\7\v\3\7\i\t\u\d\b\i\k\k\o\5\5\s\1\f\a\3\i\s\0\m\a\m\c\2\e\i\0\p\z\2\b\b\g\j\7\d\p\1\g\g\2\v\8\e\c\7\h\t\a\i\b\u\e\4\a\6\t\2\y\0\m\9\b\x\u\h\2\2\t\u\3\9\4\y\p\c\p\s\9\0\e\a\l\r\j\7\g\e\p\8\8\5\i\n\r\5\z\f\p\q\v\u\x\5\f\r\e\u\w\g\y\e\r\d\u\b\0\v\g\9\y\b\9\9\q\4\4\6\g\m\a\9\s\i\k\l\2\r\n\m\j\a\6\m\f\d\g\v\r\4\y\1\4\o\a\n\d\s\n\2\0\l\o\l\e\k\l\f\i\r\1\3\t\7\2\1\6\4\e\3\t\f\j\0\x\f\2\o\e\v\l\d\b\i\c\u\o\r\s\j\t\b\3\4\i\q\d\b\s\t\9\u\5\4\f\k\k\m\x\k\f\6\2\6\2\d\1\f\j\u\e\a\w\n\b\l\c\7\s\y\p\t\j\v\l\1\p\r\q\1\x\e\h\l\7\2\x\o\3\7\e\u\n\f\w\5\x\2\o\j\7\0\m\y\m\2\q\g\x\n\z\j\k\t\q\s\i\i\k\b\4\z\t\l\e\l\g\d\o\n\o\j\e\o\u\5\9\w\d\8\b\g\d\d\7\u\v\x\x\t\k\b\t\6\g\a\e\e\m\i\m\7\8\y\9\1\z\5\7\1\n\9\d\j\s\p\h\c\9\q\w\e\v\x\7\o\j\w\b\p\d\x\3\x\2\2\i\n\f\f\u\z\7\s\r\e\n\u\o\w\3\y\f\a\7\4\s\c\r\t\y\e\o\i\h\t\1\i\y\u\m\9\g\3\i\r\2\c\3\h\q\m\u\r\2\0\n\4\p\3\v\3\q\0\i\2\r\5\b\o\5\6\g\7\7\t\m\1\8\t\r\7\r\m\m\z\5\p\l\h\9\5\7\u\t\a\s\h\7\g\4\2\y\5\a\7\1\w\5\4\x\f\j\j\n\f\f\p\i\9\4\t\g\g\0\o\x\1\o\x\1\5\d\d\7\w\x\i\f\i\2\0\4\x\7\4\9\x\l\v\k\h\j\9\f\p\l\t\1\6\x\4\n\d\h\d\t\b\7\1\7\2\k\b\q\t\d\u\n\a\a\8\q\u\t\j\3\p\5\3\d\a\3\6\r\h\s\d\v\m\p\1\g\n\m\8\a\x\f\g\g\g\4\5\i\c\l\e\q\4\y\4\7\q\y\q\1\l\t\9\1\7\x\g\t\r\d\h\m\f\q\7\5\0\v\f\6\5\6\k\0\t\5\z\k\v\9\j\9\v\b\5\n\y\l\n\j\c\g\0\g\u\j\n\b\z\e\1\x\1\8\e\0\z\r\t\k\j\i\x\7\w\7\n\y\m\u\p\v\q\u\r\c\e\0\u\w\y\q\o\2\b\h\0\h\8\8\8\u\3\5\c\a\s\j\s\q\m\f\y\4\s\x\6\0\y\3\b\4\v\q\t\7\w\x\5\9\j\8\g\t\j\k\1\k\m\e\0\x\n\y\7\b\k\s\6\u\h\a\g\p\n\2\k\e\s\c\b\c\t\l\5\w\y\6\p\z\2\h\4\t\3\i\1\a\v\n\3\6\5\8\s\l\4\d\6\s\g\c\9\n\n\z\n\z\z\y\a\p\d\m\c\e\b\v\g\e\f\w\1\3\q\k\3\r\r\b\8\r\p\2\m\m\h\q\h\7\w\6\f\b\7\3\g\i\o\n\8\u\u\l\7\7\c\d\a\2\y\k\i\l\i\y\4\u\0\f\e\g\o\1\w\4\j\y\c\n\k\w\4\u\g\c\x\9\g\2\f\6\s\z\h\9\c\n\x\2\m\b\0\j\1\b\w\c\7\i\q\a\l\m\h\k\h\x\6\s\0\z\y\e\u\c\6\3\w\3\r\u\l\k\h\w\y\x\9\5\2\t\m\h\2\s\a\u\m\5\4\m\c\d\u\1\u\8\k\l\v\k\k\i\3\y\l\3\y\3\n\h\4\w\6\o\m\m\9\f\i\m\v\s\h\p\q\z\j\s\c\o\q\8\p\9\r\a\2\t\n\g\k\7\h\u\l\g\7\v\c\x\u\o\3\z\v\8\f\d\h\i\8\a\n\f\7\7\5\g\f\y\s\e\x\g\e\x\u\8\a\1\p\3\d\z\f\3\z\7\w\o\l\f\a\q\v\f\h\5\g\y\q\6\s\2\9\0\h\j\l\t\7\8\9\5\f\v\r\7\2\n\6\j\m\r\f\k\b\o\w\d\8\y\z\s\t\9\o\c\6\2\g\u\7\l\n\q\o\r\j\3\n\d\h\w\2\i\g\h\m\v\t\9\4\i\z\s\h\k\a\2\a\j\w\2\8\h\i\v\v\8\u\r\w\l\s\v\m\3\j\b\c\4\y\4\a\x\l\4\m\8\j\k\b\f\s\p\d\d\i\x\h\n\8\f\g\3\i\u\p\5\k\7\i\a\3\m\e\z\4\h\y\p\4\h\1\d\u\6\k\k\n\l\m\q\h\m\n\t\n\5\g\2\4\0\2\m\8\6\w\y\t\t\j\z\j\y\1\g\c\h\i\a\a\5\9\8\q\u\t\9\9\v\s\a\m\s\d\y\x\h\m\0\o\s\q\y\h\k\m\v\6\q\c\f\u\i\z\1\j\e\g\5\0\x\g\k\w\r\d\9\n\s\z\5\i\8\6\e\n\8\v\w\m\h\p\k\j\a\x\o\c\5\y\u\j\1\q\t\q\h\t\5\4\s\d\f\s\d\3\h\9\v\m\q\p\s\s\w\r\u\a\4\n\3\4\f\s\g\u\7\w\d\m\s\q\w\m\5\o\4\n\l\g\4\l\k\8\t\t\q\j\u\h\6\m\q\l\7\k\8\6\5\j\d\b\x\e\c\e\i\o\0\z\h\s\8\u\w\u\g\i\c\4\x\5\g\r\r\v\8\l\h\g\0\f\7\b\d\x\6\c\h\p\f\j\l\q\p\5\2\z\2\d\z\6\s\
p\3\0\5\7\3\b\i\b\i\7\c\y\0\s\s\c\q\i\f\b\2\e\d\8\k\1\z\m\2\1\2\c\y\g\v\e\c\5\c\l\a\1\x\5\8\w\5\y\7\n\v\h\b\z\s\z\x\m\c\l\s\i\i\d\l\7\b\r\o\n\t\r\t\z\f\i\m\q\4\s\x\f\q\r\u\2\o\b\n\a\8\e\r\y\t\r\e\n\6\u\r\h\2\7\u\3\e\6\b\f\t\q\j\y\h\k\8\u\2\2\q\y\m\t\8\0\i\f\9\r\2\l\g\7\z\h\e\t\z\h\r\g\6\4\x\l\y\c\k\2\y\5\q\r\h\1\6\w\p\t\b\t\q\1\g\w\7\y\2\g\a\2\u\c\a\1\5\w\r\w\n\j\3\m\e\a\l\1\n\t\f\0\o\4\s\q\e\e\1\u\p\n\9\7\k\z\0\5\2\n\g\b\v\a\8\z\w\i\m\o\9\w\h\5\t\t\w\t\b\f\w\p\5\k\b\l\4\h\8\5\3\s\h\c\a\s\r\r\1\i\6\n\p\u\8\o\m\y\e\l\i\o\f\2\e\9\j\7\f\p\c\t\5\n\x\h\k\n\a\k\o\l\d\m\o\g\1\r\t\7\n\g\w\w\r\e\m\j\i\2\s\j\8\x\m\9\t\w\4\z\4\e\h\m\z\5\a\a\x\f\k\h\1\y\s\r\4\t\7\8\8\v\h\u\i\s\g\d\w\8\7\q\8\8\7\h\2\n\r\u\4\w\i\e\f\t\t\e\o\k\1\u\7\r\d\q\g\1\7\d\n\2\9\h\f\5\t\k\7\8\6\p\1\l\y\l\5\j\d\d\n\g\s\9\0\4\7\o\z\w\4\n\b\o\v\3\i\g\3\i\8\y\1\6\0\q\3\6\d\x\s\c\6\k\j\w\n\7\5\v\k\a\7\o\m\s\3\4\l\n\8\r\q\0\5\5\h\m\g\l\h\7\l\v\r\w\j\6\i\d\m\b\d\h\7\y\p\l\s\t\y\9\m\2\r\b\u\3\8\8\x\j\y\7\l\0\l\5\i\b\u\o\3\v\i\y\r\u\z\v\r\b\z\s\w\y\i\o\x\u\4\l\8\y\u\3\g\r\3\a\w\q\5\a\7\0\p\u\6\0\1\y\j\5\1\0\0\9\2\2\z\c\a\h\t\h\c\8\z\p\p\v\y\5\l\b\0\c\m\a\3\f\w\s\j\n\0\7\r\k\e\4\p\i\8\t\x\6\i\l\b\3\r\z\h\s\5\k\b\c\u\j\m\w\p\k\w\8\8\a\l\3\6\5\q\7\f\8\4\0\7\f\0\a\g\g\v\m\y\5\d\b\k\h\r\o\f\t\5\l\2\p\1\e\6\q\f\c\9\b\0\y\p\h\l\0\9\f\m\5\m\b\7\3\4\b\q\3\w\p\5\1\e\x\0\y\c\s\p\i\t\2\v\h\9\u\a\z\g\g\3\e\n\i\y\z\7\j\7\4\w\a\y\e\x\2\b\y\i\k\b\6\m\a\p\x\w\v\m\1\q\8\g\3\3\e\i\d\j\a\p\p\x\u\n\a\w\k\w\j\1\v\h\h\t\9\1\3\c\6\e\h\4\3\p\4\5\d\a\h\f\n\n\k\e\v\c\l\j\c\v\9\z\n\i\4\z\y\a\6\t\t\y\n\t\m\2\k\2\t\r\3\v\h\i\n\1\o\n\p\m\j\4\f\u\x\q\k\j\h\6\m\w\i\b\2\a\o\1\u\u\j\l\o\p\7\h\3\4\n\l\f\4\v\8\1\d\p\l\4\i\y\0\s\i\8\h\0\o\g\7\b\6\s\x\v\h\8\4\x\r\v\2\8\5\a\z\5\3\f\o\6\1\d\6\c\0\v\1\8\6\9\q\w\8\2\i\o\y\n\u\i\h\6\8\t\q\a\x\j\t\a\l\x\5\e\i\m\2\j\t\i\o\h\c\9\4\j\3\l\u\t\m\o\s\m\w\r\q\w\a\l\k\s\u\h\0\y\p\f\z\y\j\j\9\e\k\n\l\e\7\i\b\v\a\z\l\j\u\2\q\f\s\2\w\i\i\b\k\4\z\w\l\8\o\4\m\u\k\s\v\c\d\8\7\8\e\e\8\x\i\4\v\l\p\k\f\q\s\e\l\l\4\a\8\w\5\9\p\e\7\2\b\f\j\k\p\n\8\5\3\m\j\e\t\z\1\d\m\d\i\k\b\d\g\a\g\n\1\p\s\m\t\a\w\v\o\e\1\n\s\1\5\q\z\b\i\l\5\e\q\c\7\e\4\m\j\p\f\c\k\b\s\e\t\u\d\8\0\7\v\w\g\8\p\m\s\m\c\0\7\d\c\9\n\c\2\6\b\x\7\v\5\2\j\8\n\q\y\5\q\s\7\e\x\1\k\3\x\n\7\5\p\r\7\t\x\m\1\1\q\9\p\7\v\7\9\1\e\c\8\q\u\h\o\0\n\q\r\e\r\5\b\z\0\e\3\i\d\2\6\v\0\3\y\g\l\5\0\8\z\k\1\x\3\5\b\v\o\0\r\7\n\5\6\v\k\q\2\3\d\3\v\s\6\h\7\y\q\8\o\e\3\y\g\m\x\m\j\9\t\b\a\z\k\z\6\8\i\u\f\0\d\5\b\4\5\b\5\5\h\h\6\u\d\6\f\4\r\v\w\x\r\n\x\m\g\1\b\u\d\d\2\o\l\j\l\x\l\z\b\r\1\0\c\s\z\8\6\z\m\h\j\c\e\h\t\0\3\c\k\t\m\t\e\o\f\2\y\w\o\h\t\k\t\3\z\i\j\a\n\f\y\h\5\c\5\5\u\n\m\l\4\8\4\c\x\i\v\2\6\1\d\7\3\a\0\4\1\e\1\w\9\n\a\2\g\5\y\w\6\g\o\u\6\v\f\8\c\f\q\7\m\z\1\8\8\b\5\z\p\w\t\e\c\n\m\3\a\0\0\0\3\p\a\3\x\z\4\b\7\6\z\z\1\s\d\6\l\s\3\r\e\3\d\y\n\3\v\a\p\y\f\w\d\k\v\1\e\5\g\2\c\o\w\m\g\s\l\o\7\4\v\s\d\n\1\9\t\i\j\f\y\f\z\f\g\2\q\h\f\t\g\r\0\5\e\y\p\m\g\c\q\y\0\d\r\y\l\d\7\s\t\d\q\1\m\a\c\u\a\8\k\8\b\z\7\5\2\o\5\8\n\l\n\d\4\z\r\6\9\3\k\4\5\m\y\j\p\m\m\m\w\f\4\v\d\h\c\z\w\x\1\u\3\m\t\r\d\7\5\z\f\y\5\e\5\p\m\s\k\e\3\n\f\m\8\n\5\2\w\9\g\g\t\j\s\i\w\a\f\g\r\s\e\m\e\t\7\p\o\q\3\r\j\4\m\u\0\h\m\n\y\t\b\k\d\g\f\f\c\t\m\8\9\o\v\t\v\z\y\i\9\o\8\g\l\e\g\8\7\q\5\3\6\f\z\5\s\n\x\j\p\q\t\g\d\h\5\f\a\t\k\m\f\h\c\6\a\b\p\r\f\8\k\b\z\o\z\a\d\9\1\4\e\0\q\v\z\q\e\y\i\2\u\p\a\s\2\s\5\r\7\a\c\t\a\n\q\j\h\n\j\8\y\f\t\r\3\m\r\1\3\0\2\w\r\t\p\d\i\r\r\8\c\4\9\e\t\r\q\r\2\a\u\x\s\f\3\l\8\l\p\s\b\x\w\c\8\j\v\d\q\3\l\g\f\3\2\o\o\n\h\5\2\g\r\e\p\y\k\h\3\k\o\v\3\m\o\3\y\u\w\b\e\0\0\b\t\1\q\o\s\7\g\r\1\g\d\l\g\6\v\q\q\2\i\s\7\t\b\r\i\k\6\z\d
\7\m\4\2\1\f\w\m\k\7\b\w\p\9\k\j\9\v\w\p\b\5\6\4\4\p\m\j\w\f\h\0\h\e\o\w\7\j\r\l\w\0\f\2\n\g\5\2\v\t\i\i\y\m\w\c\0\g\l\z\r\l\g\e\1\5\t\5\8\8\1\d\p\w\7\u\7\4\8\4\v\p\5\j\f\q\j\l\i\9\o\o\n\1\3\k\d\4\0\1\e\3\t\a\n\z\v\b\u\w\t\0\s\r\y\w\6\x\u\x\z\r\f\g\5\r\9\0\3\2\t\w\a\k\g\2\h\b\t\a\h\w\m\w\s\i\8\9\d\r\r\j\e\b\6\o\m\y\v\o\9\b\3\q\y\t\i\1\l\a\4\r\o\1\q\b\5\v\q\k\s\w\4\0\8\5\e\4\s\o\k\y\4\o\0\7\r\3\f\u\t\3\j\g\k\t\o\t\x\6\p\8\s\k\w\z\b\v\6\f\x\j\k\6\b\f\t\y\x\w\8\y\7\r\v\0\m\v\d\v\1\h\3\0\1\a\l\j\7\1\7\w\4\l\y\8\c\h\p\9\f\f\e\2\m\3\l\2\y\u\8\y\d\5\i\o\9\z\b\4\l\t\w\d\a\c\2\j\s\x\p\e\9\o\l\p\b\f\x\2\b\k\n\k\t\6\h\4\l\c\a\c\g\g\y\k\x\s\i\o\3\8\w\9\c\0\n\u\l\7\z\f\2\h\h\h\t\d\i\4\s\8\w\o\4\y\9\1\y\n\s\8\5\u\x\0\n\7\b\5\3\3\u\s\b\n\r\c\w\x\k\2\1\l\3\5\l\p\2\t\8\q\q\v\h\7\u\p\e\r\o\z\b\x\y\i\m\r\9\r\7\x\0\j\v\h\9\3\t\x\c\x\u\l\p\4\6\c\h\4\8\3\1\b\e\w\6\g\v\e\u\6\a\i\3\0\e\v\l\f\r\m\q\n\d\p\5\h\4\g\p\q\i\m\8\v\s\a\3\q\3\a\2\4\4\v\5\m\o\0\v\v\g\5\2\0\2\y\6\m\y\k\0\e\i\6\9\j\r\a\1\j\s\v\0\d\0\e\8\e\a\1\p\e\l\g\2\h\2\n\b\3\p\8\n\9\6\e\t\v\r\c\o\1\9\w\0\5\7\1\u\y\e\y\d\r\3\b\x\5\u ]] 00:05:20.500 00:05:20.500 real 0m1.244s 00:05:20.500 user 0m0.810s 00:05:20.500 sys 0m0.580s 00:05:20.500 ************************************ 00:05:20.500 END TEST dd_rw_offset 00:05:20.500 ************************************ 00:05:20.500 20:28:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.500 20:28:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:20.500 20:28:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:05:20.500 20:28:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:05:20.500 20:28:35 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:20.500 20:28:35 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:20.500 20:28:35 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:05:20.500 20:28:35 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:20.500 20:28:35 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:05:20.500 20:28:35 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:20.500 20:28:35 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:05:20.500 20:28:35 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:20.500 20:28:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:20.500 [2024-11-26 20:28:35.051282] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
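The cleanup pass starting here ("clear_nvme") zeroes one mebibyte of the Nvme0n1 bdev by handing spdk_dd a generated JSON bdev config on a substituted file descriptor (the --json /dev/fd/62 above). Stripped to its essentials, the invocation pattern is roughly the sketch below; the binary path, flags, and PCI address are the ones printed in this log, and only the here-doc stands in for the test's gen_conf helper:

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(cat <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
            "method": "bdev_nvme_attach_controller" },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  EOF
  )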
00:05:20.500 [2024-11-26 20:28:35.051546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59453 ] 00:05:20.762 { 00:05:20.762 "subsystems": [ 00:05:20.762 { 00:05:20.762 "subsystem": "bdev", 00:05:20.762 "config": [ 00:05:20.762 { 00:05:20.762 "params": { 00:05:20.762 "trtype": "pcie", 00:05:20.762 "traddr": "0000:00:10.0", 00:05:20.762 "name": "Nvme0" 00:05:20.762 }, 00:05:20.762 "method": "bdev_nvme_attach_controller" 00:05:20.762 }, 00:05:20.762 { 00:05:20.762 "method": "bdev_wait_for_examine" 00:05:20.762 } 00:05:20.762 ] 00:05:20.762 } 00:05:20.762 ] 00:05:20.762 } 00:05:20.762 [2024-11-26 20:28:35.193417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.762 [2024-11-26 20:28:35.250622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.762 [2024-11-26 20:28:35.307703] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:21.023  [2024-11-26T20:28:35.839Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:21.284 00:05:21.284 20:28:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:21.284 ************************************ 00:05:21.284 END TEST spdk_dd_basic_rw 00:05:21.284 ************************************ 00:05:21.284 00:05:21.284 real 0m14.888s 00:05:21.284 user 0m10.120s 00:05:21.284 sys 0m5.403s 00:05:21.284 20:28:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.284 20:28:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:21.284 20:28:35 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:05:21.284 20:28:35 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.284 20:28:35 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.284 20:28:35 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:21.284 ************************************ 00:05:21.284 START TEST spdk_dd_posix 00:05:21.284 ************************************ 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:05:21.284 * Looking for test storage... 
00:05:21.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:21.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.284 --rc genhtml_branch_coverage=1 00:05:21.284 --rc genhtml_function_coverage=1 00:05:21.284 --rc genhtml_legend=1 00:05:21.284 --rc geninfo_all_blocks=1 00:05:21.284 --rc geninfo_unexecuted_blocks=1 00:05:21.284 00:05:21.284 ' 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:21.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.284 --rc genhtml_branch_coverage=1 00:05:21.284 --rc genhtml_function_coverage=1 00:05:21.284 --rc genhtml_legend=1 00:05:21.284 --rc geninfo_all_blocks=1 00:05:21.284 --rc geninfo_unexecuted_blocks=1 00:05:21.284 00:05:21.284 ' 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:21.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.284 --rc genhtml_branch_coverage=1 00:05:21.284 --rc genhtml_function_coverage=1 00:05:21.284 --rc genhtml_legend=1 00:05:21.284 --rc geninfo_all_blocks=1 00:05:21.284 --rc geninfo_unexecuted_blocks=1 00:05:21.284 00:05:21.284 ' 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:21.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.284 --rc genhtml_branch_coverage=1 00:05:21.284 --rc genhtml_function_coverage=1 00:05:21.284 --rc genhtml_legend=1 00:05:21.284 --rc geninfo_all_blocks=1 00:05:21.284 --rc geninfo_unexecuted_blocks=1 00:05:21.284 00:05:21.284 ' 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:21.284 20:28:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:05:21.285 * First test run, liburing in use 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:21.285 ************************************ 00:05:21.285 START TEST dd_flag_append 00:05:21.285 ************************************ 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=urrebuezwpf7b5rpt4k6zlvxu3q7q9du 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:05:21.285 20:28:35 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:21.545 20:28:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=cebro5d3vjqluul1e9bw28o8hfehsarx 00:05:21.545 20:28:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s urrebuezwpf7b5rpt4k6zlvxu3q7q9du 00:05:21.545 20:28:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s cebro5d3vjqluul1e9bw28o8hfehsarx 00:05:21.545 20:28:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:05:21.545 [2024-11-26 20:28:35.880025] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
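The dd_flag_append test whose output follows writes one generated 32-byte string into dd.dump0 and a second one into dd.dump1, copies dump0 onto dump1 with --oflag=append, and then checks that dump1 holds the second string immediately followed by the first (that is what the long [[ ... == ... ]] comparison below establishes). Reduced to an outline, with shortened file names and $data0/$data1 as stand-ins for the generated bytes rather than the test's real variable names:

  spdk_dd() { /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd "$@"; }
  data0=0123456789abcdefghijklmnopqrstuv   # stands in for the 32 generated dump0 bytes
  data1=ABCDEFGHIJKLMNOPQRSTUVWXYZ012345   # stands in for the 32 generated dump1 bytes
  printf %s "$data0" > dd.dump0
  printf %s "$data1" > dd.dump1
  spdk_dd --if=dd.dump0 --of=dd.dump1 --oflag=append
  [[ "$(<dd.dump1)" == "${data1}${data0}" ]]   # append kept dump1's old content in front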
00:05:21.545 [2024-11-26 20:28:35.880284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59525 ] 00:05:21.545 [2024-11-26 20:28:36.018697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.545 [2024-11-26 20:28:36.066121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.806 [2024-11-26 20:28:36.110380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:21.806  [2024-11-26T20:28:36.361Z] Copying: 32/32 [B] (average 31 kBps) 00:05:21.806 00:05:21.806 ************************************ 00:05:21.806 END TEST dd_flag_append 00:05:21.806 ************************************ 00:05:21.806 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ cebro5d3vjqluul1e9bw28o8hfehsarxurrebuezwpf7b5rpt4k6zlvxu3q7q9du == \c\e\b\r\o\5\d\3\v\j\q\l\u\u\l\1\e\9\b\w\2\8\o\8\h\f\e\h\s\a\r\x\u\r\r\e\b\u\e\z\w\p\f\7\b\5\r\p\t\4\k\6\z\l\v\x\u\3\q\7\q\9\d\u ]] 00:05:21.806 00:05:21.806 real 0m0.472s 00:05:21.806 user 0m0.245s 00:05:21.806 sys 0m0.224s 00:05:21.806 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.806 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:21.806 20:28:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:05:21.806 20:28:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.806 20:28:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.806 20:28:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:22.066 ************************************ 00:05:22.066 START TEST dd_flag_directory 00:05:22.066 ************************************ 00:05:22.066 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:05:22.066 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:22.066 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:05:22.066 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:22.066 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:22.066 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.066 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:22.066 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.066 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:22.066 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.066 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:22.066 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:22.066 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:22.066 [2024-11-26 20:28:36.412897] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:05:22.066 [2024-11-26 20:28:36.412979] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59554 ] 00:05:22.066 [2024-11-26 20:28:36.554585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.066 [2024-11-26 20:28:36.602363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.326 [2024-11-26 20:28:36.646646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:22.326 [2024-11-26 20:28:36.687416] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:22.326 [2024-11-26 20:28:36.687479] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:22.326 [2024-11-26 20:28:36.687493] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:22.326 [2024-11-26 20:28:36.785484] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:22.326 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:05:22.326 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:22.326 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:05:22.326 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:05:22.326 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:05:22.326 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:22.326 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:22.326 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:05:22.326 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:22.326 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:22.326 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.326 20:28:36 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:22.326 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.326 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:22.326 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.326 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:22.326 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:22.326 20:28:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:22.586 [2024-11-26 20:28:36.882096] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:05:22.586 [2024-11-26 20:28:36.882184] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59563 ] 00:05:22.586 [2024-11-26 20:28:37.021927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.586 [2024-11-26 20:28:37.079223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.586 [2024-11-26 20:28:37.132452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:22.847 [2024-11-26 20:28:37.175154] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:22.847 [2024-11-26 20:28:37.175215] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:22.847 [2024-11-26 20:28:37.175228] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:22.847 [2024-11-26 20:28:37.281923] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:22.847 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:05:22.847 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:22.847 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:05:22.847 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:05:22.847 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:05:22.847 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:22.847 00:05:22.847 real 0m0.965s 00:05:22.847 user 0m0.524s 00:05:22.847 sys 0m0.228s 00:05:22.847 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.847 ************************************ 00:05:22.847 END TEST dd_flag_directory 00:05:22.847 ************************************ 00:05:22.847 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:05:22.847 20:28:37 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:05:22.847 20:28:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.847 20:28:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.847 20:28:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:22.847 ************************************ 00:05:22.847 START TEST dd_flag_nofollow 00:05:22.847 ************************************ 00:05:22.847 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:05:22.847 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:22.847 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:22.847 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:22.847 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:23.108 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:23.108 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:05:23.108 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:23.108 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:23.108 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.108 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:23.108 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.108 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:23.108 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.108 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:23.108 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:23.108 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:23.108 [2024-11-26 20:28:37.449236] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
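The dd_flag_nofollow test starting here replaces the dump files with symlinks (the ln -fs calls above) and expects spdk_dd to refuse to open them when nofollow is requested, which is where the "Too many levels of symbolic links" errors below come from; the same copy without the flag must still succeed. A condensed sketch with shortened paths (the log's NOT helper effectively asserts that the wrapped command fails):

  spdk_dd() { /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd "$@"; }
  ln -fs dd.dump0 dd.dump0.link
  ln -fs dd.dump1 dd.dump1.link
  ! spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1   # must fail: input is a symlink
  ! spdk_dd --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow   # must fail: output is a symlink
  spdk_dd --if=dd.dump0.link --of=dd.dump1                      # without nofollow the copy succeeds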
00:05:23.108 [2024-11-26 20:28:37.449349] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59592 ] 00:05:23.108 [2024-11-26 20:28:37.595045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.108 [2024-11-26 20:28:37.639550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.368 [2024-11-26 20:28:37.677332] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:23.368 [2024-11-26 20:28:37.707832] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:23.368 [2024-11-26 20:28:37.707878] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:23.368 [2024-11-26 20:28:37.707889] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:23.368 [2024-11-26 20:28:37.779924] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:23.368 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:05:23.368 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:23.368 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:05:23.368 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:05:23.368 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:05:23.368 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:23.368 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:23.368 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:05:23.368 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:23.368 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:23.368 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.368 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:23.368 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.368 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:23.368 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.368 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:23.368 20:28:37 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:23.368 20:28:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:23.368 [2024-11-26 20:28:37.869964] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:05:23.368 [2024-11-26 20:28:37.870039] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59601 ] 00:05:23.668 [2024-11-26 20:28:38.011333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.668 [2024-11-26 20:28:38.054628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.668 [2024-11-26 20:28:38.095850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:23.668 [2024-11-26 20:28:38.132245] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:23.668 [2024-11-26 20:28:38.132303] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:23.668 [2024-11-26 20:28:38.132319] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:23.955 [2024-11-26 20:28:38.223029] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:23.955 20:28:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:05:23.955 20:28:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:23.955 20:28:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:05:23.955 20:28:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:05:23.955 20:28:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:05:23.955 20:28:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:23.955 20:28:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:05:23.955 20:28:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:05:23.955 20:28:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:05:23.955 20:28:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:23.955 [2024-11-26 20:28:38.321993] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:23.955 [2024-11-26 20:28:38.322083] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59609 ] 00:05:23.955 [2024-11-26 20:28:38.466485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.217 [2024-11-26 20:28:38.518908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.218 [2024-11-26 20:28:38.566082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:24.218  [2024-11-26T20:28:38.773Z] Copying: 512/512 [B] (average 500 kBps) 00:05:24.218 00:05:24.218 20:28:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 38sx0qf7na0m9howe4uzlytn8qvgbxlv3nduz2gqx89hc7eyynr1w066n6723yzb7yd9mudit8vjh0n2i64tzqlfgblpi3st5k8c9ide0622ilgt1aqu2v64jhbewatzb96311xfg2jd5itprbyvjrw53p2bonc5oxx0y6duf5kwprzf36h7vrzzl866u533amekzhka94bm6zdetwm9bunb5cmmrvtzod8ezsm41wple0qjptl658hjvtnwbnc1jkckuygs9dt3m1swm57q1vbf4gcj4eugnbbnxcvaxyzbschmj5hoyrnsh117hk3yay0tayhmwsfliawipqlrsu5wueajoi3odxaunyw8rtrhjwraoyiucu5dupdrv5s8747a2a36svk450f2uqmwanyrrbx6d4zwqjq22hhs4fmnbk2n7oxkwqnnoc80hrulqh25n38l6m9tuu0q4u3bskvcuu0c1rkavry5xk66jta2i5j6sjj52kcbadhmiwjw == \3\8\s\x\0\q\f\7\n\a\0\m\9\h\o\w\e\4\u\z\l\y\t\n\8\q\v\g\b\x\l\v\3\n\d\u\z\2\g\q\x\8\9\h\c\7\e\y\y\n\r\1\w\0\6\6\n\6\7\2\3\y\z\b\7\y\d\9\m\u\d\i\t\8\v\j\h\0\n\2\i\6\4\t\z\q\l\f\g\b\l\p\i\3\s\t\5\k\8\c\9\i\d\e\0\6\2\2\i\l\g\t\1\a\q\u\2\v\6\4\j\h\b\e\w\a\t\z\b\9\6\3\1\1\x\f\g\2\j\d\5\i\t\p\r\b\y\v\j\r\w\5\3\p\2\b\o\n\c\5\o\x\x\0\y\6\d\u\f\5\k\w\p\r\z\f\3\6\h\7\v\r\z\z\l\8\6\6\u\5\3\3\a\m\e\k\z\h\k\a\9\4\b\m\6\z\d\e\t\w\m\9\b\u\n\b\5\c\m\m\r\v\t\z\o\d\8\e\z\s\m\4\1\w\p\l\e\0\q\j\p\t\l\6\5\8\h\j\v\t\n\w\b\n\c\1\j\k\c\k\u\y\g\s\9\d\t\3\m\1\s\w\m\5\7\q\1\v\b\f\4\g\c\j\4\e\u\g\n\b\b\n\x\c\v\a\x\y\z\b\s\c\h\m\j\5\h\o\y\r\n\s\h\1\1\7\h\k\3\y\a\y\0\t\a\y\h\m\w\s\f\l\i\a\w\i\p\q\l\r\s\u\5\w\u\e\a\j\o\i\3\o\d\x\a\u\n\y\w\8\r\t\r\h\j\w\r\a\o\y\i\u\c\u\5\d\u\p\d\r\v\5\s\8\7\4\7\a\2\a\3\6\s\v\k\4\5\0\f\2\u\q\m\w\a\n\y\r\r\b\x\6\d\4\z\w\q\j\q\2\2\h\h\s\4\f\m\n\b\k\2\n\7\o\x\k\w\q\n\n\o\c\8\0\h\r\u\l\q\h\2\5\n\3\8\l\6\m\9\t\u\u\0\q\4\u\3\b\s\k\v\c\u\u\0\c\1\r\k\a\v\r\y\5\x\k\6\6\j\t\a\2\i\5\j\6\s\j\j\5\2\k\c\b\a\d\h\m\i\w\j\w ]] 00:05:24.218 00:05:24.218 real 0m1.364s 00:05:24.218 user 0m0.709s 00:05:24.218 sys 0m0.442s 00:05:24.218 ************************************ 00:05:24.218 END TEST dd_flag_nofollow 00:05:24.218 ************************************ 00:05:24.218 20:28:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.218 20:28:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:05:24.480 20:28:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:05:24.480 20:28:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.480 20:28:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.480 20:28:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:24.480 ************************************ 00:05:24.480 START TEST dd_flag_noatime 00:05:24.480 ************************************ 00:05:24.480 20:28:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:05:24.480 20:28:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:05:24.480 20:28:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:05:24.480 20:28:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:05:24.480 20:28:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:05:24.480 20:28:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:05:24.480 20:28:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:24.480 20:28:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1732652918 00:05:24.480 20:28:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:24.480 20:28:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1732652918 00:05:24.480 20:28:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:05:25.424 20:28:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:25.424 [2024-11-26 20:28:39.893581] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:05:25.424 [2024-11-26 20:28:39.893706] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59651 ] 00:05:25.685 [2024-11-26 20:28:40.038814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.685 [2024-11-26 20:28:40.097956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.685 [2024-11-26 20:28:40.153903] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:25.685  [2024-11-26T20:28:40.499Z] Copying: 512/512 [B] (average 500 kBps) 00:05:25.944 00:05:25.944 20:28:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:25.944 20:28:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1732652918 )) 00:05:25.944 20:28:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:25.944 20:28:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1732652918 )) 00:05:25.944 20:28:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:25.944 [2024-11-26 20:28:40.433534] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
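The dd_flag_noatime run above records dd.dump0's access time with stat --printf=%X (the 1732652918 value), sleeps one second, copies the file with --iflag=noatime, and re-checks that the atime has not moved; the second copy starting here omits the flag, after which the stored timestamp is expected to be strictly older (the (( atime_if < ... )) check further down). In outline, with shortened paths:

  spdk_dd() { /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd "$@"; }
  atime_if=$(stat --printf=%X dd.dump0)
  sleep 1
  spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
  (( $(stat --printf=%X dd.dump0) == atime_if ))   # noatime: access time unchanged
  spdk_dd --if=dd.dump0 --of=dd.dump1              # plain read is allowed to update atime
  (( $(stat --printf=%X dd.dump0) > atime_if ))    # stored value is now in the past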
00:05:25.944 [2024-11-26 20:28:40.433970] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59665 ] 00:05:26.204 [2024-11-26 20:28:40.575883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.204 [2024-11-26 20:28:40.631979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.204 [2024-11-26 20:28:40.685464] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:26.204  [2024-11-26T20:28:41.019Z] Copying: 512/512 [B] (average 500 kBps) 00:05:26.464 00:05:26.464 20:28:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:26.464 ************************************ 00:05:26.464 END TEST dd_flag_noatime 00:05:26.464 ************************************ 00:05:26.464 20:28:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1732652920 )) 00:05:26.464 00:05:26.464 real 0m2.078s 00:05:26.464 user 0m0.560s 00:05:26.464 sys 0m0.549s 00:05:26.464 20:28:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.464 20:28:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:05:26.464 20:28:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:05:26.464 20:28:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.464 20:28:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.464 20:28:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:26.464 ************************************ 00:05:26.464 START TEST dd_flags_misc 00:05:26.464 ************************************ 00:05:26.464 20:28:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:05:26.464 20:28:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:05:26.464 20:28:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:05:26.464 20:28:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:05:26.464 20:28:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:05:26.464 20:28:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:05:26.464 20:28:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:05:26.464 20:28:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:05:26.464 20:28:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:26.464 20:28:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:05:26.726 [2024-11-26 20:28:41.028333] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
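dd_flags_misc, starting here, sweeps a small matrix of open flags: each read flag in (direct, nonblock) is paired with each write flag in (direct, nonblock, sync, dsync), dd.dump0 is copied to dd.dump1 with that --iflag/--oflag pair, and the copied bytes are compared afterwards (the long [[ ... == ... ]] lines below). Schematically, with shortened paths:

  spdk_dd() { /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd "$@"; }
  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)
  for flag_ro in "${flags_ro[@]}"; do
    for flag_rw in "${flags_rw[@]}"; do
      spdk_dd --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
      # each pass is followed by a content check of dd.dump1 against the generated dump0 bytes
    done
  done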
00:05:26.726 [2024-11-26 20:28:41.028436] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59693 ] 00:05:26.726 [2024-11-26 20:28:41.172714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.726 [2024-11-26 20:28:41.234278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.988 [2024-11-26 20:28:41.292118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:26.988  [2024-11-26T20:28:41.543Z] Copying: 512/512 [B] (average 500 kBps) 00:05:26.988 00:05:26.988 20:28:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ sx0o23cs3ujnrqytdyu6wwucstk5b7eal20h8mzi3urds8m3o2x70qhj396yy1xq67nzg1qzh8rer81bio65d0usdz050k13b57j6yndz7o7qgcw8v67cbc3s4e5k9bqxwajyc7pabmrgdy14l24z3gwaimvs4dimd3pg3da7er6nwr9g1bpmdlglsucfi9polqrfo7rtfvk7gsossgw1nm9pfkzqqif8fb7dptakbk5az4v9ri6wr19lim2gbecmq08ukikdslt5g95i095lbtqtxxmnrz0c8czv5u8qvjn24ae1mbkpqeng9wpnhvmn1e30zg4fzlskctvek6hhavgcdt7jl6tolef56ur6nf165g7l6ts21wn1dj9tiapjbodimzuwo2c1uaw4xsopgm5ay08wz0655f3kaagciz1zfzxqze834qcjmr47uji6g6ffc3kftcrnfx5xil6kumy96n7jnu13s28ipn8rtvi5hjrmeovmwfo9sp8pi11 == \s\x\0\o\2\3\c\s\3\u\j\n\r\q\y\t\d\y\u\6\w\w\u\c\s\t\k\5\b\7\e\a\l\2\0\h\8\m\z\i\3\u\r\d\s\8\m\3\o\2\x\7\0\q\h\j\3\9\6\y\y\1\x\q\6\7\n\z\g\1\q\z\h\8\r\e\r\8\1\b\i\o\6\5\d\0\u\s\d\z\0\5\0\k\1\3\b\5\7\j\6\y\n\d\z\7\o\7\q\g\c\w\8\v\6\7\c\b\c\3\s\4\e\5\k\9\b\q\x\w\a\j\y\c\7\p\a\b\m\r\g\d\y\1\4\l\2\4\z\3\g\w\a\i\m\v\s\4\d\i\m\d\3\p\g\3\d\a\7\e\r\6\n\w\r\9\g\1\b\p\m\d\l\g\l\s\u\c\f\i\9\p\o\l\q\r\f\o\7\r\t\f\v\k\7\g\s\o\s\s\g\w\1\n\m\9\p\f\k\z\q\q\i\f\8\f\b\7\d\p\t\a\k\b\k\5\a\z\4\v\9\r\i\6\w\r\1\9\l\i\m\2\g\b\e\c\m\q\0\8\u\k\i\k\d\s\l\t\5\g\9\5\i\0\9\5\l\b\t\q\t\x\x\m\n\r\z\0\c\8\c\z\v\5\u\8\q\v\j\n\2\4\a\e\1\m\b\k\p\q\e\n\g\9\w\p\n\h\v\m\n\1\e\3\0\z\g\4\f\z\l\s\k\c\t\v\e\k\6\h\h\a\v\g\c\d\t\7\j\l\6\t\o\l\e\f\5\6\u\r\6\n\f\1\6\5\g\7\l\6\t\s\2\1\w\n\1\d\j\9\t\i\a\p\j\b\o\d\i\m\z\u\w\o\2\c\1\u\a\w\4\x\s\o\p\g\m\5\a\y\0\8\w\z\0\6\5\5\f\3\k\a\a\g\c\i\z\1\z\f\z\x\q\z\e\8\3\4\q\c\j\m\r\4\7\u\j\i\6\g\6\f\f\c\3\k\f\t\c\r\n\f\x\5\x\i\l\6\k\u\m\y\9\6\n\7\j\n\u\1\3\s\2\8\i\p\n\8\r\t\v\i\5\h\j\r\m\e\o\v\m\w\f\o\9\s\p\8\p\i\1\1 ]] 00:05:26.988 20:28:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:26.988 20:28:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:05:27.250 [2024-11-26 20:28:41.551611] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:27.250 [2024-11-26 20:28:41.551696] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59703 ] 00:05:27.250 [2024-11-26 20:28:41.690240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.250 [2024-11-26 20:28:41.743383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.250 [2024-11-26 20:28:41.791964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:27.589  [2024-11-26T20:28:42.144Z] Copying: 512/512 [B] (average 500 kBps) 00:05:27.589 00:05:27.589 20:28:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ sx0o23cs3ujnrqytdyu6wwucstk5b7eal20h8mzi3urds8m3o2x70qhj396yy1xq67nzg1qzh8rer81bio65d0usdz050k13b57j6yndz7o7qgcw8v67cbc3s4e5k9bqxwajyc7pabmrgdy14l24z3gwaimvs4dimd3pg3da7er6nwr9g1bpmdlglsucfi9polqrfo7rtfvk7gsossgw1nm9pfkzqqif8fb7dptakbk5az4v9ri6wr19lim2gbecmq08ukikdslt5g95i095lbtqtxxmnrz0c8czv5u8qvjn24ae1mbkpqeng9wpnhvmn1e30zg4fzlskctvek6hhavgcdt7jl6tolef56ur6nf165g7l6ts21wn1dj9tiapjbodimzuwo2c1uaw4xsopgm5ay08wz0655f3kaagciz1zfzxqze834qcjmr47uji6g6ffc3kftcrnfx5xil6kumy96n7jnu13s28ipn8rtvi5hjrmeovmwfo9sp8pi11 == \s\x\0\o\2\3\c\s\3\u\j\n\r\q\y\t\d\y\u\6\w\w\u\c\s\t\k\5\b\7\e\a\l\2\0\h\8\m\z\i\3\u\r\d\s\8\m\3\o\2\x\7\0\q\h\j\3\9\6\y\y\1\x\q\6\7\n\z\g\1\q\z\h\8\r\e\r\8\1\b\i\o\6\5\d\0\u\s\d\z\0\5\0\k\1\3\b\5\7\j\6\y\n\d\z\7\o\7\q\g\c\w\8\v\6\7\c\b\c\3\s\4\e\5\k\9\b\q\x\w\a\j\y\c\7\p\a\b\m\r\g\d\y\1\4\l\2\4\z\3\g\w\a\i\m\v\s\4\d\i\m\d\3\p\g\3\d\a\7\e\r\6\n\w\r\9\g\1\b\p\m\d\l\g\l\s\u\c\f\i\9\p\o\l\q\r\f\o\7\r\t\f\v\k\7\g\s\o\s\s\g\w\1\n\m\9\p\f\k\z\q\q\i\f\8\f\b\7\d\p\t\a\k\b\k\5\a\z\4\v\9\r\i\6\w\r\1\9\l\i\m\2\g\b\e\c\m\q\0\8\u\k\i\k\d\s\l\t\5\g\9\5\i\0\9\5\l\b\t\q\t\x\x\m\n\r\z\0\c\8\c\z\v\5\u\8\q\v\j\n\2\4\a\e\1\m\b\k\p\q\e\n\g\9\w\p\n\h\v\m\n\1\e\3\0\z\g\4\f\z\l\s\k\c\t\v\e\k\6\h\h\a\v\g\c\d\t\7\j\l\6\t\o\l\e\f\5\6\u\r\6\n\f\1\6\5\g\7\l\6\t\s\2\1\w\n\1\d\j\9\t\i\a\p\j\b\o\d\i\m\z\u\w\o\2\c\1\u\a\w\4\x\s\o\p\g\m\5\a\y\0\8\w\z\0\6\5\5\f\3\k\a\a\g\c\i\z\1\z\f\z\x\q\z\e\8\3\4\q\c\j\m\r\4\7\u\j\i\6\g\6\f\f\c\3\k\f\t\c\r\n\f\x\5\x\i\l\6\k\u\m\y\9\6\n\7\j\n\u\1\3\s\2\8\i\p\n\8\r\t\v\i\5\h\j\r\m\e\o\v\m\w\f\o\9\s\p\8\p\i\1\1 ]] 00:05:27.589 20:28:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:27.589 20:28:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:05:27.589 [2024-11-26 20:28:42.014956] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:27.589 [2024-11-26 20:28:42.015038] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59712 ] 00:05:27.882 [2024-11-26 20:28:42.157748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.882 [2024-11-26 20:28:42.212548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.882 [2024-11-26 20:28:42.259092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:27.882  [2024-11-26T20:28:42.699Z] Copying: 512/512 [B] (average 71 kBps) 00:05:28.144 00:05:28.144 20:28:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ sx0o23cs3ujnrqytdyu6wwucstk5b7eal20h8mzi3urds8m3o2x70qhj396yy1xq67nzg1qzh8rer81bio65d0usdz050k13b57j6yndz7o7qgcw8v67cbc3s4e5k9bqxwajyc7pabmrgdy14l24z3gwaimvs4dimd3pg3da7er6nwr9g1bpmdlglsucfi9polqrfo7rtfvk7gsossgw1nm9pfkzqqif8fb7dptakbk5az4v9ri6wr19lim2gbecmq08ukikdslt5g95i095lbtqtxxmnrz0c8czv5u8qvjn24ae1mbkpqeng9wpnhvmn1e30zg4fzlskctvek6hhavgcdt7jl6tolef56ur6nf165g7l6ts21wn1dj9tiapjbodimzuwo2c1uaw4xsopgm5ay08wz0655f3kaagciz1zfzxqze834qcjmr47uji6g6ffc3kftcrnfx5xil6kumy96n7jnu13s28ipn8rtvi5hjrmeovmwfo9sp8pi11 == \s\x\0\o\2\3\c\s\3\u\j\n\r\q\y\t\d\y\u\6\w\w\u\c\s\t\k\5\b\7\e\a\l\2\0\h\8\m\z\i\3\u\r\d\s\8\m\3\o\2\x\7\0\q\h\j\3\9\6\y\y\1\x\q\6\7\n\z\g\1\q\z\h\8\r\e\r\8\1\b\i\o\6\5\d\0\u\s\d\z\0\5\0\k\1\3\b\5\7\j\6\y\n\d\z\7\o\7\q\g\c\w\8\v\6\7\c\b\c\3\s\4\e\5\k\9\b\q\x\w\a\j\y\c\7\p\a\b\m\r\g\d\y\1\4\l\2\4\z\3\g\w\a\i\m\v\s\4\d\i\m\d\3\p\g\3\d\a\7\e\r\6\n\w\r\9\g\1\b\p\m\d\l\g\l\s\u\c\f\i\9\p\o\l\q\r\f\o\7\r\t\f\v\k\7\g\s\o\s\s\g\w\1\n\m\9\p\f\k\z\q\q\i\f\8\f\b\7\d\p\t\a\k\b\k\5\a\z\4\v\9\r\i\6\w\r\1\9\l\i\m\2\g\b\e\c\m\q\0\8\u\k\i\k\d\s\l\t\5\g\9\5\i\0\9\5\l\b\t\q\t\x\x\m\n\r\z\0\c\8\c\z\v\5\u\8\q\v\j\n\2\4\a\e\1\m\b\k\p\q\e\n\g\9\w\p\n\h\v\m\n\1\e\3\0\z\g\4\f\z\l\s\k\c\t\v\e\k\6\h\h\a\v\g\c\d\t\7\j\l\6\t\o\l\e\f\5\6\u\r\6\n\f\1\6\5\g\7\l\6\t\s\2\1\w\n\1\d\j\9\t\i\a\p\j\b\o\d\i\m\z\u\w\o\2\c\1\u\a\w\4\x\s\o\p\g\m\5\a\y\0\8\w\z\0\6\5\5\f\3\k\a\a\g\c\i\z\1\z\f\z\x\q\z\e\8\3\4\q\c\j\m\r\4\7\u\j\i\6\g\6\f\f\c\3\k\f\t\c\r\n\f\x\5\x\i\l\6\k\u\m\y\9\6\n\7\j\n\u\1\3\s\2\8\i\p\n\8\r\t\v\i\5\h\j\r\m\e\o\v\m\w\f\o\9\s\p\8\p\i\1\1 ]] 00:05:28.144 20:28:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:28.144 20:28:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:05:28.144 [2024-11-26 20:28:42.486163] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:28.144 [2024-11-26 20:28:42.486229] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59722 ] 00:05:28.144 [2024-11-26 20:28:42.626992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.144 [2024-11-26 20:28:42.678424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.405 [2024-11-26 20:28:42.726800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:28.405  [2024-11-26T20:28:42.960Z] Copying: 512/512 [B] (average 166 kBps) 00:05:28.405 00:05:28.405 20:28:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ sx0o23cs3ujnrqytdyu6wwucstk5b7eal20h8mzi3urds8m3o2x70qhj396yy1xq67nzg1qzh8rer81bio65d0usdz050k13b57j6yndz7o7qgcw8v67cbc3s4e5k9bqxwajyc7pabmrgdy14l24z3gwaimvs4dimd3pg3da7er6nwr9g1bpmdlglsucfi9polqrfo7rtfvk7gsossgw1nm9pfkzqqif8fb7dptakbk5az4v9ri6wr19lim2gbecmq08ukikdslt5g95i095lbtqtxxmnrz0c8czv5u8qvjn24ae1mbkpqeng9wpnhvmn1e30zg4fzlskctvek6hhavgcdt7jl6tolef56ur6nf165g7l6ts21wn1dj9tiapjbodimzuwo2c1uaw4xsopgm5ay08wz0655f3kaagciz1zfzxqze834qcjmr47uji6g6ffc3kftcrnfx5xil6kumy96n7jnu13s28ipn8rtvi5hjrmeovmwfo9sp8pi11 == \s\x\0\o\2\3\c\s\3\u\j\n\r\q\y\t\d\y\u\6\w\w\u\c\s\t\k\5\b\7\e\a\l\2\0\h\8\m\z\i\3\u\r\d\s\8\m\3\o\2\x\7\0\q\h\j\3\9\6\y\y\1\x\q\6\7\n\z\g\1\q\z\h\8\r\e\r\8\1\b\i\o\6\5\d\0\u\s\d\z\0\5\0\k\1\3\b\5\7\j\6\y\n\d\z\7\o\7\q\g\c\w\8\v\6\7\c\b\c\3\s\4\e\5\k\9\b\q\x\w\a\j\y\c\7\p\a\b\m\r\g\d\y\1\4\l\2\4\z\3\g\w\a\i\m\v\s\4\d\i\m\d\3\p\g\3\d\a\7\e\r\6\n\w\r\9\g\1\b\p\m\d\l\g\l\s\u\c\f\i\9\p\o\l\q\r\f\o\7\r\t\f\v\k\7\g\s\o\s\s\g\w\1\n\m\9\p\f\k\z\q\q\i\f\8\f\b\7\d\p\t\a\k\b\k\5\a\z\4\v\9\r\i\6\w\r\1\9\l\i\m\2\g\b\e\c\m\q\0\8\u\k\i\k\d\s\l\t\5\g\9\5\i\0\9\5\l\b\t\q\t\x\x\m\n\r\z\0\c\8\c\z\v\5\u\8\q\v\j\n\2\4\a\e\1\m\b\k\p\q\e\n\g\9\w\p\n\h\v\m\n\1\e\3\0\z\g\4\f\z\l\s\k\c\t\v\e\k\6\h\h\a\v\g\c\d\t\7\j\l\6\t\o\l\e\f\5\6\u\r\6\n\f\1\6\5\g\7\l\6\t\s\2\1\w\n\1\d\j\9\t\i\a\p\j\b\o\d\i\m\z\u\w\o\2\c\1\u\a\w\4\x\s\o\p\g\m\5\a\y\0\8\w\z\0\6\5\5\f\3\k\a\a\g\c\i\z\1\z\f\z\x\q\z\e\8\3\4\q\c\j\m\r\4\7\u\j\i\6\g\6\f\f\c\3\k\f\t\c\r\n\f\x\5\x\i\l\6\k\u\m\y\9\6\n\7\j\n\u\1\3\s\2\8\i\p\n\8\r\t\v\i\5\h\j\r\m\e\o\v\m\w\f\o\9\s\p\8\p\i\1\1 ]] 00:05:28.405 20:28:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:05:28.405 20:28:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:05:28.405 20:28:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:05:28.405 20:28:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:05:28.405 20:28:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:28.405 20:28:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:05:28.405 [2024-11-26 20:28:42.955608] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:28.406 [2024-11-26 20:28:42.955718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59731 ] 00:05:28.666 [2024-11-26 20:28:43.103230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.666 [2024-11-26 20:28:43.160049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.666 [2024-11-26 20:28:43.212851] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:28.927  [2024-11-26T20:28:43.482Z] Copying: 512/512 [B] (average 500 kBps) 00:05:28.927 00:05:28.927 20:28:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ f6skzeep7gbqifd1tzorso6u6jsgecjh50dlvis8nbtd5wxbfpr6qo5sfwqm6h63e3tgc04tydhpw02r1t6ry2vgtvp3hqpw4ve3tfqwk5n06hl3xzvq7sj364lh05hj4nnngz1y3nvfpo98y6dd4id9cgwchaut2rc5hkpa5a86eczvedks41vetpmzkcm6g2qaplfcjz99ev0cyegcrt4dvail2y2zaz3jz5zj4n8lpnbjf31l1d06penfsltyjgm9nelqizquyudqphkn8cnur3joeqzs30dfwiroyc2m6pnr4jxecvcafh2jdwa244b8g331whnivqy887ig7ff7guyuepsh4651lh0qszf53xrxnowqvo6p9nes2g43dkvcix4fns6s0y629f1yuxz17smaa3v4c8witj3thpklobkn1l8jgmnovnt2noxoa2y4taicdae5e17oxeflr31s118ubc0a9iarxqsa2hbigc222vzw8o75r3vkzwrs == \f\6\s\k\z\e\e\p\7\g\b\q\i\f\d\1\t\z\o\r\s\o\6\u\6\j\s\g\e\c\j\h\5\0\d\l\v\i\s\8\n\b\t\d\5\w\x\b\f\p\r\6\q\o\5\s\f\w\q\m\6\h\6\3\e\3\t\g\c\0\4\t\y\d\h\p\w\0\2\r\1\t\6\r\y\2\v\g\t\v\p\3\h\q\p\w\4\v\e\3\t\f\q\w\k\5\n\0\6\h\l\3\x\z\v\q\7\s\j\3\6\4\l\h\0\5\h\j\4\n\n\n\g\z\1\y\3\n\v\f\p\o\9\8\y\6\d\d\4\i\d\9\c\g\w\c\h\a\u\t\2\r\c\5\h\k\p\a\5\a\8\6\e\c\z\v\e\d\k\s\4\1\v\e\t\p\m\z\k\c\m\6\g\2\q\a\p\l\f\c\j\z\9\9\e\v\0\c\y\e\g\c\r\t\4\d\v\a\i\l\2\y\2\z\a\z\3\j\z\5\z\j\4\n\8\l\p\n\b\j\f\3\1\l\1\d\0\6\p\e\n\f\s\l\t\y\j\g\m\9\n\e\l\q\i\z\q\u\y\u\d\q\p\h\k\n\8\c\n\u\r\3\j\o\e\q\z\s\3\0\d\f\w\i\r\o\y\c\2\m\6\p\n\r\4\j\x\e\c\v\c\a\f\h\2\j\d\w\a\2\4\4\b\8\g\3\3\1\w\h\n\i\v\q\y\8\8\7\i\g\7\f\f\7\g\u\y\u\e\p\s\h\4\6\5\1\l\h\0\q\s\z\f\5\3\x\r\x\n\o\w\q\v\o\6\p\9\n\e\s\2\g\4\3\d\k\v\c\i\x\4\f\n\s\6\s\0\y\6\2\9\f\1\y\u\x\z\1\7\s\m\a\a\3\v\4\c\8\w\i\t\j\3\t\h\p\k\l\o\b\k\n\1\l\8\j\g\m\n\o\v\n\t\2\n\o\x\o\a\2\y\4\t\a\i\c\d\a\e\5\e\1\7\o\x\e\f\l\r\3\1\s\1\1\8\u\b\c\0\a\9\i\a\r\x\q\s\a\2\h\b\i\g\c\2\2\2\v\z\w\8\o\7\5\r\3\v\k\z\w\r\s ]] 00:05:28.927 20:28:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:28.927 20:28:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:05:28.927 [2024-11-26 20:28:43.441298] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:28.927 [2024-11-26 20:28:43.441670] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59741 ] 00:05:29.188 [2024-11-26 20:28:43.588172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.188 [2024-11-26 20:28:43.644055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.188 [2024-11-26 20:28:43.693201] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:29.188  [2024-11-26T20:28:44.005Z] Copying: 512/512 [B] (average 500 kBps) 00:05:29.450 00:05:29.450 20:28:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ f6skzeep7gbqifd1tzorso6u6jsgecjh50dlvis8nbtd5wxbfpr6qo5sfwqm6h63e3tgc04tydhpw02r1t6ry2vgtvp3hqpw4ve3tfqwk5n06hl3xzvq7sj364lh05hj4nnngz1y3nvfpo98y6dd4id9cgwchaut2rc5hkpa5a86eczvedks41vetpmzkcm6g2qaplfcjz99ev0cyegcrt4dvail2y2zaz3jz5zj4n8lpnbjf31l1d06penfsltyjgm9nelqizquyudqphkn8cnur3joeqzs30dfwiroyc2m6pnr4jxecvcafh2jdwa244b8g331whnivqy887ig7ff7guyuepsh4651lh0qszf53xrxnowqvo6p9nes2g43dkvcix4fns6s0y629f1yuxz17smaa3v4c8witj3thpklobkn1l8jgmnovnt2noxoa2y4taicdae5e17oxeflr31s118ubc0a9iarxqsa2hbigc222vzw8o75r3vkzwrs == \f\6\s\k\z\e\e\p\7\g\b\q\i\f\d\1\t\z\o\r\s\o\6\u\6\j\s\g\e\c\j\h\5\0\d\l\v\i\s\8\n\b\t\d\5\w\x\b\f\p\r\6\q\o\5\s\f\w\q\m\6\h\6\3\e\3\t\g\c\0\4\t\y\d\h\p\w\0\2\r\1\t\6\r\y\2\v\g\t\v\p\3\h\q\p\w\4\v\e\3\t\f\q\w\k\5\n\0\6\h\l\3\x\z\v\q\7\s\j\3\6\4\l\h\0\5\h\j\4\n\n\n\g\z\1\y\3\n\v\f\p\o\9\8\y\6\d\d\4\i\d\9\c\g\w\c\h\a\u\t\2\r\c\5\h\k\p\a\5\a\8\6\e\c\z\v\e\d\k\s\4\1\v\e\t\p\m\z\k\c\m\6\g\2\q\a\p\l\f\c\j\z\9\9\e\v\0\c\y\e\g\c\r\t\4\d\v\a\i\l\2\y\2\z\a\z\3\j\z\5\z\j\4\n\8\l\p\n\b\j\f\3\1\l\1\d\0\6\p\e\n\f\s\l\t\y\j\g\m\9\n\e\l\q\i\z\q\u\y\u\d\q\p\h\k\n\8\c\n\u\r\3\j\o\e\q\z\s\3\0\d\f\w\i\r\o\y\c\2\m\6\p\n\r\4\j\x\e\c\v\c\a\f\h\2\j\d\w\a\2\4\4\b\8\g\3\3\1\w\h\n\i\v\q\y\8\8\7\i\g\7\f\f\7\g\u\y\u\e\p\s\h\4\6\5\1\l\h\0\q\s\z\f\5\3\x\r\x\n\o\w\q\v\o\6\p\9\n\e\s\2\g\4\3\d\k\v\c\i\x\4\f\n\s\6\s\0\y\6\2\9\f\1\y\u\x\z\1\7\s\m\a\a\3\v\4\c\8\w\i\t\j\3\t\h\p\k\l\o\b\k\n\1\l\8\j\g\m\n\o\v\n\t\2\n\o\x\o\a\2\y\4\t\a\i\c\d\a\e\5\e\1\7\o\x\e\f\l\r\3\1\s\1\1\8\u\b\c\0\a\9\i\a\r\x\q\s\a\2\h\b\i\g\c\2\2\2\v\z\w\8\o\7\5\r\3\v\k\z\w\r\s ]] 00:05:29.450 20:28:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:29.450 20:28:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:05:29.450 [2024-11-26 20:28:43.911123] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:29.450 [2024-11-26 20:28:43.911330] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59749 ] 00:05:29.711 [2024-11-26 20:28:44.051015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.711 [2024-11-26 20:28:44.104122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.711 [2024-11-26 20:28:44.151287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:29.711  [2024-11-26T20:28:44.527Z] Copying: 512/512 [B] (average 83 kBps) 00:05:29.972 00:05:29.972 20:28:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ f6skzeep7gbqifd1tzorso6u6jsgecjh50dlvis8nbtd5wxbfpr6qo5sfwqm6h63e3tgc04tydhpw02r1t6ry2vgtvp3hqpw4ve3tfqwk5n06hl3xzvq7sj364lh05hj4nnngz1y3nvfpo98y6dd4id9cgwchaut2rc5hkpa5a86eczvedks41vetpmzkcm6g2qaplfcjz99ev0cyegcrt4dvail2y2zaz3jz5zj4n8lpnbjf31l1d06penfsltyjgm9nelqizquyudqphkn8cnur3joeqzs30dfwiroyc2m6pnr4jxecvcafh2jdwa244b8g331whnivqy887ig7ff7guyuepsh4651lh0qszf53xrxnowqvo6p9nes2g43dkvcix4fns6s0y629f1yuxz17smaa3v4c8witj3thpklobkn1l8jgmnovnt2noxoa2y4taicdae5e17oxeflr31s118ubc0a9iarxqsa2hbigc222vzw8o75r3vkzwrs == \f\6\s\k\z\e\e\p\7\g\b\q\i\f\d\1\t\z\o\r\s\o\6\u\6\j\s\g\e\c\j\h\5\0\d\l\v\i\s\8\n\b\t\d\5\w\x\b\f\p\r\6\q\o\5\s\f\w\q\m\6\h\6\3\e\3\t\g\c\0\4\t\y\d\h\p\w\0\2\r\1\t\6\r\y\2\v\g\t\v\p\3\h\q\p\w\4\v\e\3\t\f\q\w\k\5\n\0\6\h\l\3\x\z\v\q\7\s\j\3\6\4\l\h\0\5\h\j\4\n\n\n\g\z\1\y\3\n\v\f\p\o\9\8\y\6\d\d\4\i\d\9\c\g\w\c\h\a\u\t\2\r\c\5\h\k\p\a\5\a\8\6\e\c\z\v\e\d\k\s\4\1\v\e\t\p\m\z\k\c\m\6\g\2\q\a\p\l\f\c\j\z\9\9\e\v\0\c\y\e\g\c\r\t\4\d\v\a\i\l\2\y\2\z\a\z\3\j\z\5\z\j\4\n\8\l\p\n\b\j\f\3\1\l\1\d\0\6\p\e\n\f\s\l\t\y\j\g\m\9\n\e\l\q\i\z\q\u\y\u\d\q\p\h\k\n\8\c\n\u\r\3\j\o\e\q\z\s\3\0\d\f\w\i\r\o\y\c\2\m\6\p\n\r\4\j\x\e\c\v\c\a\f\h\2\j\d\w\a\2\4\4\b\8\g\3\3\1\w\h\n\i\v\q\y\8\8\7\i\g\7\f\f\7\g\u\y\u\e\p\s\h\4\6\5\1\l\h\0\q\s\z\f\5\3\x\r\x\n\o\w\q\v\o\6\p\9\n\e\s\2\g\4\3\d\k\v\c\i\x\4\f\n\s\6\s\0\y\6\2\9\f\1\y\u\x\z\1\7\s\m\a\a\3\v\4\c\8\w\i\t\j\3\t\h\p\k\l\o\b\k\n\1\l\8\j\g\m\n\o\v\n\t\2\n\o\x\o\a\2\y\4\t\a\i\c\d\a\e\5\e\1\7\o\x\e\f\l\r\3\1\s\1\1\8\u\b\c\0\a\9\i\a\r\x\q\s\a\2\h\b\i\g\c\2\2\2\v\z\w\8\o\7\5\r\3\v\k\z\w\r\s ]] 00:05:29.972 20:28:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:29.972 20:28:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:05:29.972 [2024-11-26 20:28:44.394357] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:29.972 [2024-11-26 20:28:44.394430] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59754 ] 00:05:30.233 [2024-11-26 20:28:44.537838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.233 [2024-11-26 20:28:44.592547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.233 [2024-11-26 20:28:44.641580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:30.233  [2024-11-26T20:28:45.079Z] Copying: 512/512 [B] (average 166 kBps) 00:05:30.524 00:05:30.524 20:28:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ f6skzeep7gbqifd1tzorso6u6jsgecjh50dlvis8nbtd5wxbfpr6qo5sfwqm6h63e3tgc04tydhpw02r1t6ry2vgtvp3hqpw4ve3tfqwk5n06hl3xzvq7sj364lh05hj4nnngz1y3nvfpo98y6dd4id9cgwchaut2rc5hkpa5a86eczvedks41vetpmzkcm6g2qaplfcjz99ev0cyegcrt4dvail2y2zaz3jz5zj4n8lpnbjf31l1d06penfsltyjgm9nelqizquyudqphkn8cnur3joeqzs30dfwiroyc2m6pnr4jxecvcafh2jdwa244b8g331whnivqy887ig7ff7guyuepsh4651lh0qszf53xrxnowqvo6p9nes2g43dkvcix4fns6s0y629f1yuxz17smaa3v4c8witj3thpklobkn1l8jgmnovnt2noxoa2y4taicdae5e17oxeflr31s118ubc0a9iarxqsa2hbigc222vzw8o75r3vkzwrs == \f\6\s\k\z\e\e\p\7\g\b\q\i\f\d\1\t\z\o\r\s\o\6\u\6\j\s\g\e\c\j\h\5\0\d\l\v\i\s\8\n\b\t\d\5\w\x\b\f\p\r\6\q\o\5\s\f\w\q\m\6\h\6\3\e\3\t\g\c\0\4\t\y\d\h\p\w\0\2\r\1\t\6\r\y\2\v\g\t\v\p\3\h\q\p\w\4\v\e\3\t\f\q\w\k\5\n\0\6\h\l\3\x\z\v\q\7\s\j\3\6\4\l\h\0\5\h\j\4\n\n\n\g\z\1\y\3\n\v\f\p\o\9\8\y\6\d\d\4\i\d\9\c\g\w\c\h\a\u\t\2\r\c\5\h\k\p\a\5\a\8\6\e\c\z\v\e\d\k\s\4\1\v\e\t\p\m\z\k\c\m\6\g\2\q\a\p\l\f\c\j\z\9\9\e\v\0\c\y\e\g\c\r\t\4\d\v\a\i\l\2\y\2\z\a\z\3\j\z\5\z\j\4\n\8\l\p\n\b\j\f\3\1\l\1\d\0\6\p\e\n\f\s\l\t\y\j\g\m\9\n\e\l\q\i\z\q\u\y\u\d\q\p\h\k\n\8\c\n\u\r\3\j\o\e\q\z\s\3\0\d\f\w\i\r\o\y\c\2\m\6\p\n\r\4\j\x\e\c\v\c\a\f\h\2\j\d\w\a\2\4\4\b\8\g\3\3\1\w\h\n\i\v\q\y\8\8\7\i\g\7\f\f\7\g\u\y\u\e\p\s\h\4\6\5\1\l\h\0\q\s\z\f\5\3\x\r\x\n\o\w\q\v\o\6\p\9\n\e\s\2\g\4\3\d\k\v\c\i\x\4\f\n\s\6\s\0\y\6\2\9\f\1\y\u\x\z\1\7\s\m\a\a\3\v\4\c\8\w\i\t\j\3\t\h\p\k\l\o\b\k\n\1\l\8\j\g\m\n\o\v\n\t\2\n\o\x\o\a\2\y\4\t\a\i\c\d\a\e\5\e\1\7\o\x\e\f\l\r\3\1\s\1\1\8\u\b\c\0\a\9\i\a\r\x\q\s\a\2\h\b\i\g\c\2\2\2\v\z\w\8\o\7\5\r\3\v\k\z\w\r\s ]] 00:05:30.524 00:05:30.524 real 0m3.843s 00:05:30.524 user 0m2.010s 00:05:30.524 sys 0m1.849s 00:05:30.524 20:28:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.524 ************************************ 00:05:30.524 END TEST dd_flags_misc 00:05:30.525 ************************************ 00:05:30.525 20:28:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:05:30.525 20:28:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:05:30.525 20:28:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:05:30.525 * Second test run, disabling liburing, forcing AIO 00:05:30.525 20:28:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:05:30.525 20:28:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:05:30.525 20:28:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.525 20:28:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.525 20:28:44 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:05:30.525 ************************************ 00:05:30.525 START TEST dd_flag_append_forced_aio 00:05:30.525 ************************************ 00:05:30.525 20:28:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:05:30.525 20:28:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:05:30.525 20:28:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:05:30.525 20:28:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:05:30.525 20:28:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:30.525 20:28:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:30.525 20:28:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=kehmyqlu712xziemhnpw3at6llywie7k 00:05:30.525 20:28:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:05:30.525 20:28:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:30.525 20:28:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:30.525 20:28:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=hpdshykr6qzvj0tc0xdzc83a11rrhedb 00:05:30.525 20:28:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s kehmyqlu712xziemhnpw3at6llywie7k 00:05:30.525 20:28:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s hpdshykr6qzvj0tc0xdzc83a11rrhedb 00:05:30.525 20:28:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:05:30.525 [2024-11-26 20:28:44.939682] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:30.525 [2024-11-26 20:28:44.939768] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59783 ] 00:05:30.787 [2024-11-26 20:28:45.087446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.787 [2024-11-26 20:28:45.139417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.787 [2024-11-26 20:28:45.188330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:30.787  [2024-11-26T20:28:45.602Z] Copying: 32/32 [B] (average 31 kBps) 00:05:31.047 00:05:31.047 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ hpdshykr6qzvj0tc0xdzc83a11rrhedbkehmyqlu712xziemhnpw3at6llywie7k == \h\p\d\s\h\y\k\r\6\q\z\v\j\0\t\c\0\x\d\z\c\8\3\a\1\1\r\r\h\e\d\b\k\e\h\m\y\q\l\u\7\1\2\x\z\i\e\m\h\n\p\w\3\a\t\6\l\l\y\w\i\e\7\k ]] 00:05:31.047 00:05:31.047 real 0m0.518s 00:05:31.047 user 0m0.273s 00:05:31.047 sys 0m0.123s 00:05:31.047 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.047 ************************************ 00:05:31.047 END TEST dd_flag_append_forced_aio 00:05:31.047 ************************************ 00:05:31.047 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:31.047 20:28:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:05:31.047 20:28:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.047 20:28:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.047 20:28:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:31.047 ************************************ 00:05:31.047 START TEST dd_flag_directory_forced_aio 00:05:31.047 ************************************ 00:05:31.047 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:05:31.047 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:31.047 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:05:31.047 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:31.047 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:31.047 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.047 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:31.047 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.047 20:28:45 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:31.047 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.047 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:31.047 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:31.047 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:31.047 [2024-11-26 20:28:45.508032] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:05:31.047 [2024-11-26 20:28:45.508701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59815 ] 00:05:31.308 [2024-11-26 20:28:45.651500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.308 [2024-11-26 20:28:45.708442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.308 [2024-11-26 20:28:45.760561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:31.308 [2024-11-26 20:28:45.798164] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:31.308 [2024-11-26 20:28:45.798219] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:31.308 [2024-11-26 20:28:45.798231] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:31.570 [2024-11-26 20:28:45.882853] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:31.570 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:05:31.570 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:31.570 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:05:31.570 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:05:31.570 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:05:31.570 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:31.570 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:31.570 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:05:31.570 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:31.570 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:31.570 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.570 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:31.570 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.570 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:31.570 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.570 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:31.570 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:31.570 20:28:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:31.570 [2024-11-26 20:28:45.971939] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:05:31.570 [2024-11-26 20:28:45.972130] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59819 ] 00:05:31.570 [2024-11-26 20:28:46.114381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.850 [2024-11-26 20:28:46.158120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.850 [2024-11-26 20:28:46.195453] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:31.850 [2024-11-26 20:28:46.226844] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:31.850 [2024-11-26 20:28:46.226889] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:31.850 [2024-11-26 20:28:46.226901] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:31.850 [2024-11-26 20:28:46.308504] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:31.850 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:05:31.850 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:31.850 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:05:31.850 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:05:31.850 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:05:31.850 20:28:46 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:31.850 00:05:31.850 real 0m0.893s 00:05:31.850 user 0m0.458s 00:05:31.850 sys 0m0.220s 00:05:31.850 ************************************ 00:05:31.850 END TEST dd_flag_directory_forced_aio 00:05:31.850 ************************************ 00:05:31.850 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.850 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:32.110 20:28:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:05:32.110 20:28:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.110 20:28:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.110 20:28:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:32.110 ************************************ 00:05:32.110 START TEST dd_flag_nofollow_forced_aio 00:05:32.110 ************************************ 00:05:32.110 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:05:32.110 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:32.110 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:32.110 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:32.110 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:32.110 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:32.110 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:05:32.110 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:32.110 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:32.110 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.110 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:32.110 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.110 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:32.110 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.110 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:32.110 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:32.110 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:32.110 [2024-11-26 20:28:46.478287] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:05:32.110 [2024-11-26 20:28:46.478359] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59853 ] 00:05:32.110 [2024-11-26 20:28:46.619356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.371 [2024-11-26 20:28:46.665486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.371 [2024-11-26 20:28:46.704884] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:32.371 [2024-11-26 20:28:46.737922] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:32.371 [2024-11-26 20:28:46.737966] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:32.371 [2024-11-26 20:28:46.737978] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:32.371 [2024-11-26 20:28:46.819820] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:32.371 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:05:32.371 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:32.371 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:05:32.371 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:05:32.371 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:05:32.371 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:32.371 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:32.371 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:05:32.371 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:32.371 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:32.371 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.372 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:32.372 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.372 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:32.372 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.372 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:32.372 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:32.372 20:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:32.372 [2024-11-26 20:28:46.915008] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:05:32.372 [2024-11-26 20:28:46.915238] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59858 ] 00:05:32.632 [2024-11-26 20:28:47.053430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.632 [2024-11-26 20:28:47.098160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.632 [2024-11-26 20:28:47.137031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:32.632 [2024-11-26 20:28:47.168509] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:32.632 [2024-11-26 20:28:47.168556] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:32.632 [2024-11-26 20:28:47.168568] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:32.894 [2024-11-26 20:28:47.243599] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:32.894 20:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:05:32.894 20:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:32.894 20:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:05:32.894 20:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:05:32.894 20:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:05:32.894 20:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:32.894 20:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:05:32.894 20:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:32.894 20:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:32.894 20:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:32.894 [2024-11-26 20:28:47.345734] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:05:32.894 [2024-11-26 20:28:47.345811] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59865 ] 00:05:33.155 [2024-11-26 20:28:47.485162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.155 [2024-11-26 20:28:47.531473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.155 [2024-11-26 20:28:47.572760] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:33.155  [2024-11-26T20:28:47.971Z] Copying: 512/512 [B] (average 500 kBps) 00:05:33.416 00:05:33.416 20:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 56px8m33f08s5ndmzsv24512rgmcdcx2yp3bzf3l10qmagu6c29slr23x1het1rbltkj6ilkuh58sgx2vsngjv44xucxi1n6888jb9rhpm3ejsyxvxwzc2ylde58d9r5a1y0jr54hdxgmwqhwcjbphs83pkpu7oxo6gu5uw5p7fhiv5ogbo5289ajp53fafu48h09j91uol6xg7hqf3slgw2i6xqon8jjg9a6ka5381iyjsg6ka3euciw3ny5cvns5iqfoouu633rwikn5f3gm4ajqb11pljnpuld6imkchrbeyw9rb4u5ey5hu6x9y7vh4dotludxsypik8ntgkp17tzrbb9lo4s9wasco0ueuaqg6brz3s4ov6yhptcodrpl4xqt1e96d4dhswkdzrtuj838pe28i24mprnxw5u9oojrta9v6nix63fabfoimec3c7ezbn2125fojguxvo7zmwscw1cfb5mwsrfjyhygcwr0t0o02qgub4fdyn10l9 == \5\6\p\x\8\m\3\3\f\0\8\s\5\n\d\m\z\s\v\2\4\5\1\2\r\g\m\c\d\c\x\2\y\p\3\b\z\f\3\l\1\0\q\m\a\g\u\6\c\2\9\s\l\r\2\3\x\1\h\e\t\1\r\b\l\t\k\j\6\i\l\k\u\h\5\8\s\g\x\2\v\s\n\g\j\v\4\4\x\u\c\x\i\1\n\6\8\8\8\j\b\9\r\h\p\m\3\e\j\s\y\x\v\x\w\z\c\2\y\l\d\e\5\8\d\9\r\5\a\1\y\0\j\r\5\4\h\d\x\g\m\w\q\h\w\c\j\b\p\h\s\8\3\p\k\p\u\7\o\x\o\6\g\u\5\u\w\5\p\7\f\h\i\v\5\o\g\b\o\5\2\8\9\a\j\p\5\3\f\a\f\u\4\8\h\0\9\j\9\1\u\o\l\6\x\g\7\h\q\f\3\s\l\g\w\2\i\6\x\q\o\n\8\j\j\g\9\a\6\k\a\5\3\8\1\i\y\j\s\g\6\k\a\3\e\u\c\i\w\3\n\y\5\c\v\n\s\5\i\q\f\o\o\u\u\6\3\3\r\w\i\k\n\5\f\3\g\m\4\a\j\q\b\1\1\p\l\j\n\p\u\l\d\6\i\m\k\c\h\r\b\e\y\w\9\r\b\4\u\5\e\y\5\h\u\6\x\9\y\7\v\h\4\d\o\t\l\u\d\x\s\y\p\i\k\8\n\t\g\k\p\1\7\t\z\r\b\b\9\l\o\4\s\9\w\a\s\c\o\0\u\e\u\a\q\g\6\b\r\z\3\s\4\o\v\6\y\h\p\t\c\o\d\r\p\l\4\x\q\t\1\e\9\6\d\4\d\h\s\w\k\d\z\r\t\u\j\8\3\8\p\e\2\8\i\2\4\m\p\r\n\x\w\5\u\9\o\o\j\r\t\a\9\v\6\n\i\x\6\3\f\a\b\f\o\i\m\e\c\3\c\7\e\z\b\n\2\1\2\5\f\o\j\g\u\x\v\o\7\z\m\w\s\c\w\1\c\f\b\5\m\w\s\r\f\j\y\h\y\g\c\w\r\0\t\0\o\0\2\q\g\u\b\4\f\d\y\n\1\0\l\9 ]] 00:05:33.416 00:05:33.416 real 0m1.328s 00:05:33.416 user 0m0.684s 00:05:33.416 sys 0m0.310s 00:05:33.416 20:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.416 20:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:33.416 ************************************ 00:05:33.416 END TEST dd_flag_nofollow_forced_aio 00:05:33.416 ************************************ 00:05:33.416 20:28:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:05:33.416 20:28:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.416 20:28:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.416 20:28:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:33.416 ************************************ 00:05:33.416 START TEST dd_flag_noatime_forced_aio 00:05:33.416 ************************************ 00:05:33.416 20:28:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:05:33.416 20:28:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:05:33.416 20:28:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:05:33.416 20:28:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:05:33.416 20:28:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:33.416 20:28:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:33.416 20:28:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:33.416 20:28:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1732652927 00:05:33.416 20:28:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:33.416 20:28:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1732652927 00:05:33.416 20:28:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:05:34.359 20:28:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:34.359 [2024-11-26 20:28:48.899063] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:34.359 [2024-11-26 20:28:48.899363] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59906 ] 00:05:34.619 [2024-11-26 20:28:49.046642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.619 [2024-11-26 20:28:49.091863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.619 [2024-11-26 20:28:49.131263] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:34.619  [2024-11-26T20:28:49.436Z] Copying: 512/512 [B] (average 500 kBps) 00:05:34.881 00:05:34.881 20:28:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:34.881 20:28:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1732652927 )) 00:05:34.881 20:28:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:34.881 20:28:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1732652927 )) 00:05:34.881 20:28:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:34.881 [2024-11-26 20:28:49.360114] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:05:34.881 [2024-11-26 20:28:49.360437] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59917 ] 00:05:35.186 [2024-11-26 20:28:49.505231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.186 [2024-11-26 20:28:49.545918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.186 [2024-11-26 20:28:49.580813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:35.186  [2024-11-26T20:28:50.023Z] Copying: 512/512 [B] (average 500 kBps) 00:05:35.468 00:05:35.468 20:28:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:35.468 ************************************ 00:05:35.468 END TEST dd_flag_noatime_forced_aio 00:05:35.468 ************************************ 00:05:35.468 20:28:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1732652929 )) 00:05:35.468 00:05:35.468 real 0m1.924s 00:05:35.468 user 0m0.453s 00:05:35.468 sys 0m0.223s 00:05:35.468 20:28:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.468 20:28:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:35.468 20:28:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:05:35.468 20:28:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.468 20:28:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.468 20:28:49 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:05:35.468 ************************************ 00:05:35.468 START TEST dd_flags_misc_forced_aio 00:05:35.468 ************************************ 00:05:35.468 20:28:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:05:35.468 20:28:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:05:35.468 20:28:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:05:35.468 20:28:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:05:35.468 20:28:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:05:35.468 20:28:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:05:35.468 20:28:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:35.468 20:28:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:35.468 20:28:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:35.468 20:28:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:05:35.468 [2024-11-26 20:28:49.865930] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:05:35.468 [2024-11-26 20:28:49.866003] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59944 ] 00:05:35.468 [2024-11-26 20:28:50.004333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.729 [2024-11-26 20:28:50.048417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.730 [2024-11-26 20:28:50.088165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:35.730  [2024-11-26T20:28:50.285Z] Copying: 512/512 [B] (average 500 kBps) 00:05:35.730 00:05:35.730 20:28:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ qx0qi5gvautusj576tnez4l3i67eli3dxnomrehpq8onjapo5ppzzx7jb72mi6x1h9k8di03opejvdhxr4yj8pp51g8htroc74210i09wz7wh0rwwv7p2qxhpg8ysk7hjj09tizt60pkdw0tgv30cbg5eclxdhvecx6s9m6c975y79nzah9ylqea6n91mxau6ikmqq20g145acj0ctqvg4s3t2grc3t5y3440t2lvz7qotzbrq3myr0xjvf6lwbez9eqpcehoaidtw66qik39qs0o9nc1cp8v0984ikc9qtxna7fc4d4xlsohruygylfk732re9ewisk4w3zf2den57i3hw5s2zgj4kitpjf5qxzgqq5zqamekvjsddem9r1ydgi3n0xoh3fwl74y0no8mw360s09bw0vsqp4sciznwyddxires124gcjbyaps6ildsycltgf9c3egfldofj0imiozrcjq8f1za9uu62gfsiez8nboj3ozd7vsvwliiu == 
\q\x\0\q\i\5\g\v\a\u\t\u\s\j\5\7\6\t\n\e\z\4\l\3\i\6\7\e\l\i\3\d\x\n\o\m\r\e\h\p\q\8\o\n\j\a\p\o\5\p\p\z\z\x\7\j\b\7\2\m\i\6\x\1\h\9\k\8\d\i\0\3\o\p\e\j\v\d\h\x\r\4\y\j\8\p\p\5\1\g\8\h\t\r\o\c\7\4\2\1\0\i\0\9\w\z\7\w\h\0\r\w\w\v\7\p\2\q\x\h\p\g\8\y\s\k\7\h\j\j\0\9\t\i\z\t\6\0\p\k\d\w\0\t\g\v\3\0\c\b\g\5\e\c\l\x\d\h\v\e\c\x\6\s\9\m\6\c\9\7\5\y\7\9\n\z\a\h\9\y\l\q\e\a\6\n\9\1\m\x\a\u\6\i\k\m\q\q\2\0\g\1\4\5\a\c\j\0\c\t\q\v\g\4\s\3\t\2\g\r\c\3\t\5\y\3\4\4\0\t\2\l\v\z\7\q\o\t\z\b\r\q\3\m\y\r\0\x\j\v\f\6\l\w\b\e\z\9\e\q\p\c\e\h\o\a\i\d\t\w\6\6\q\i\k\3\9\q\s\0\o\9\n\c\1\c\p\8\v\0\9\8\4\i\k\c\9\q\t\x\n\a\7\f\c\4\d\4\x\l\s\o\h\r\u\y\g\y\l\f\k\7\3\2\r\e\9\e\w\i\s\k\4\w\3\z\f\2\d\e\n\5\7\i\3\h\w\5\s\2\z\g\j\4\k\i\t\p\j\f\5\q\x\z\g\q\q\5\z\q\a\m\e\k\v\j\s\d\d\e\m\9\r\1\y\d\g\i\3\n\0\x\o\h\3\f\w\l\7\4\y\0\n\o\8\m\w\3\6\0\s\0\9\b\w\0\v\s\q\p\4\s\c\i\z\n\w\y\d\d\x\i\r\e\s\1\2\4\g\c\j\b\y\a\p\s\6\i\l\d\s\y\c\l\t\g\f\9\c\3\e\g\f\l\d\o\f\j\0\i\m\i\o\z\r\c\j\q\8\f\1\z\a\9\u\u\6\2\g\f\s\i\e\z\8\n\b\o\j\3\o\z\d\7\v\s\v\w\l\i\i\u ]] 00:05:35.730 20:28:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:35.730 20:28:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:05:35.990 [2024-11-26 20:28:50.313400] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:05:35.990 [2024-11-26 20:28:50.313476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59951 ] 00:05:35.990 [2024-11-26 20:28:50.456326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.990 [2024-11-26 20:28:50.503105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.251 [2024-11-26 20:28:50.543959] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:36.251  [2024-11-26T20:28:50.806Z] Copying: 512/512 [B] (average 500 kBps) 00:05:36.251 00:05:36.251 20:28:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ qx0qi5gvautusj576tnez4l3i67eli3dxnomrehpq8onjapo5ppzzx7jb72mi6x1h9k8di03opejvdhxr4yj8pp51g8htroc74210i09wz7wh0rwwv7p2qxhpg8ysk7hjj09tizt60pkdw0tgv30cbg5eclxdhvecx6s9m6c975y79nzah9ylqea6n91mxau6ikmqq20g145acj0ctqvg4s3t2grc3t5y3440t2lvz7qotzbrq3myr0xjvf6lwbez9eqpcehoaidtw66qik39qs0o9nc1cp8v0984ikc9qtxna7fc4d4xlsohruygylfk732re9ewisk4w3zf2den57i3hw5s2zgj4kitpjf5qxzgqq5zqamekvjsddem9r1ydgi3n0xoh3fwl74y0no8mw360s09bw0vsqp4sciznwyddxires124gcjbyaps6ildsycltgf9c3egfldofj0imiozrcjq8f1za9uu62gfsiez8nboj3ozd7vsvwliiu == 
\q\x\0\q\i\5\g\v\a\u\t\u\s\j\5\7\6\t\n\e\z\4\l\3\i\6\7\e\l\i\3\d\x\n\o\m\r\e\h\p\q\8\o\n\j\a\p\o\5\p\p\z\z\x\7\j\b\7\2\m\i\6\x\1\h\9\k\8\d\i\0\3\o\p\e\j\v\d\h\x\r\4\y\j\8\p\p\5\1\g\8\h\t\r\o\c\7\4\2\1\0\i\0\9\w\z\7\w\h\0\r\w\w\v\7\p\2\q\x\h\p\g\8\y\s\k\7\h\j\j\0\9\t\i\z\t\6\0\p\k\d\w\0\t\g\v\3\0\c\b\g\5\e\c\l\x\d\h\v\e\c\x\6\s\9\m\6\c\9\7\5\y\7\9\n\z\a\h\9\y\l\q\e\a\6\n\9\1\m\x\a\u\6\i\k\m\q\q\2\0\g\1\4\5\a\c\j\0\c\t\q\v\g\4\s\3\t\2\g\r\c\3\t\5\y\3\4\4\0\t\2\l\v\z\7\q\o\t\z\b\r\q\3\m\y\r\0\x\j\v\f\6\l\w\b\e\z\9\e\q\p\c\e\h\o\a\i\d\t\w\6\6\q\i\k\3\9\q\s\0\o\9\n\c\1\c\p\8\v\0\9\8\4\i\k\c\9\q\t\x\n\a\7\f\c\4\d\4\x\l\s\o\h\r\u\y\g\y\l\f\k\7\3\2\r\e\9\e\w\i\s\k\4\w\3\z\f\2\d\e\n\5\7\i\3\h\w\5\s\2\z\g\j\4\k\i\t\p\j\f\5\q\x\z\g\q\q\5\z\q\a\m\e\k\v\j\s\d\d\e\m\9\r\1\y\d\g\i\3\n\0\x\o\h\3\f\w\l\7\4\y\0\n\o\8\m\w\3\6\0\s\0\9\b\w\0\v\s\q\p\4\s\c\i\z\n\w\y\d\d\x\i\r\e\s\1\2\4\g\c\j\b\y\a\p\s\6\i\l\d\s\y\c\l\t\g\f\9\c\3\e\g\f\l\d\o\f\j\0\i\m\i\o\z\r\c\j\q\8\f\1\z\a\9\u\u\6\2\g\f\s\i\e\z\8\n\b\o\j\3\o\z\d\7\v\s\v\w\l\i\i\u ]] 00:05:36.251 20:28:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:36.251 20:28:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:05:36.251 [2024-11-26 20:28:50.760623] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:05:36.251 [2024-11-26 20:28:50.760693] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59959 ] 00:05:36.512 [2024-11-26 20:28:50.902055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.512 [2024-11-26 20:28:50.945073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.512 [2024-11-26 20:28:50.981420] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:36.512  [2024-11-26T20:28:51.328Z] Copying: 512/512 [B] (average 71 kBps) 00:05:36.773 00:05:36.773 20:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ qx0qi5gvautusj576tnez4l3i67eli3dxnomrehpq8onjapo5ppzzx7jb72mi6x1h9k8di03opejvdhxr4yj8pp51g8htroc74210i09wz7wh0rwwv7p2qxhpg8ysk7hjj09tizt60pkdw0tgv30cbg5eclxdhvecx6s9m6c975y79nzah9ylqea6n91mxau6ikmqq20g145acj0ctqvg4s3t2grc3t5y3440t2lvz7qotzbrq3myr0xjvf6lwbez9eqpcehoaidtw66qik39qs0o9nc1cp8v0984ikc9qtxna7fc4d4xlsohruygylfk732re9ewisk4w3zf2den57i3hw5s2zgj4kitpjf5qxzgqq5zqamekvjsddem9r1ydgi3n0xoh3fwl74y0no8mw360s09bw0vsqp4sciznwyddxires124gcjbyaps6ildsycltgf9c3egfldofj0imiozrcjq8f1za9uu62gfsiez8nboj3ozd7vsvwliiu == 
\q\x\0\q\i\5\g\v\a\u\t\u\s\j\5\7\6\t\n\e\z\4\l\3\i\6\7\e\l\i\3\d\x\n\o\m\r\e\h\p\q\8\o\n\j\a\p\o\5\p\p\z\z\x\7\j\b\7\2\m\i\6\x\1\h\9\k\8\d\i\0\3\o\p\e\j\v\d\h\x\r\4\y\j\8\p\p\5\1\g\8\h\t\r\o\c\7\4\2\1\0\i\0\9\w\z\7\w\h\0\r\w\w\v\7\p\2\q\x\h\p\g\8\y\s\k\7\h\j\j\0\9\t\i\z\t\6\0\p\k\d\w\0\t\g\v\3\0\c\b\g\5\e\c\l\x\d\h\v\e\c\x\6\s\9\m\6\c\9\7\5\y\7\9\n\z\a\h\9\y\l\q\e\a\6\n\9\1\m\x\a\u\6\i\k\m\q\q\2\0\g\1\4\5\a\c\j\0\c\t\q\v\g\4\s\3\t\2\g\r\c\3\t\5\y\3\4\4\0\t\2\l\v\z\7\q\o\t\z\b\r\q\3\m\y\r\0\x\j\v\f\6\l\w\b\e\z\9\e\q\p\c\e\h\o\a\i\d\t\w\6\6\q\i\k\3\9\q\s\0\o\9\n\c\1\c\p\8\v\0\9\8\4\i\k\c\9\q\t\x\n\a\7\f\c\4\d\4\x\l\s\o\h\r\u\y\g\y\l\f\k\7\3\2\r\e\9\e\w\i\s\k\4\w\3\z\f\2\d\e\n\5\7\i\3\h\w\5\s\2\z\g\j\4\k\i\t\p\j\f\5\q\x\z\g\q\q\5\z\q\a\m\e\k\v\j\s\d\d\e\m\9\r\1\y\d\g\i\3\n\0\x\o\h\3\f\w\l\7\4\y\0\n\o\8\m\w\3\6\0\s\0\9\b\w\0\v\s\q\p\4\s\c\i\z\n\w\y\d\d\x\i\r\e\s\1\2\4\g\c\j\b\y\a\p\s\6\i\l\d\s\y\c\l\t\g\f\9\c\3\e\g\f\l\d\o\f\j\0\i\m\i\o\z\r\c\j\q\8\f\1\z\a\9\u\u\6\2\g\f\s\i\e\z\8\n\b\o\j\3\o\z\d\7\v\s\v\w\l\i\i\u ]] 00:05:36.773 20:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:36.773 20:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:05:36.773 [2024-11-26 20:28:51.199981] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:05:36.773 [2024-11-26 20:28:51.200045] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59961 ] 00:05:37.034 [2024-11-26 20:28:51.339743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.034 [2024-11-26 20:28:51.386917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.034 [2024-11-26 20:28:51.426498] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:37.034  [2024-11-26T20:28:51.852Z] Copying: 512/512 [B] (average 166 kBps) 00:05:37.297 00:05:37.297 20:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ qx0qi5gvautusj576tnez4l3i67eli3dxnomrehpq8onjapo5ppzzx7jb72mi6x1h9k8di03opejvdhxr4yj8pp51g8htroc74210i09wz7wh0rwwv7p2qxhpg8ysk7hjj09tizt60pkdw0tgv30cbg5eclxdhvecx6s9m6c975y79nzah9ylqea6n91mxau6ikmqq20g145acj0ctqvg4s3t2grc3t5y3440t2lvz7qotzbrq3myr0xjvf6lwbez9eqpcehoaidtw66qik39qs0o9nc1cp8v0984ikc9qtxna7fc4d4xlsohruygylfk732re9ewisk4w3zf2den57i3hw5s2zgj4kitpjf5qxzgqq5zqamekvjsddem9r1ydgi3n0xoh3fwl74y0no8mw360s09bw0vsqp4sciznwyddxires124gcjbyaps6ildsycltgf9c3egfldofj0imiozrcjq8f1za9uu62gfsiez8nboj3ozd7vsvwliiu == 
\q\x\0\q\i\5\g\v\a\u\t\u\s\j\5\7\6\t\n\e\z\4\l\3\i\6\7\e\l\i\3\d\x\n\o\m\r\e\h\p\q\8\o\n\j\a\p\o\5\p\p\z\z\x\7\j\b\7\2\m\i\6\x\1\h\9\k\8\d\i\0\3\o\p\e\j\v\d\h\x\r\4\y\j\8\p\p\5\1\g\8\h\t\r\o\c\7\4\2\1\0\i\0\9\w\z\7\w\h\0\r\w\w\v\7\p\2\q\x\h\p\g\8\y\s\k\7\h\j\j\0\9\t\i\z\t\6\0\p\k\d\w\0\t\g\v\3\0\c\b\g\5\e\c\l\x\d\h\v\e\c\x\6\s\9\m\6\c\9\7\5\y\7\9\n\z\a\h\9\y\l\q\e\a\6\n\9\1\m\x\a\u\6\i\k\m\q\q\2\0\g\1\4\5\a\c\j\0\c\t\q\v\g\4\s\3\t\2\g\r\c\3\t\5\y\3\4\4\0\t\2\l\v\z\7\q\o\t\z\b\r\q\3\m\y\r\0\x\j\v\f\6\l\w\b\e\z\9\e\q\p\c\e\h\o\a\i\d\t\w\6\6\q\i\k\3\9\q\s\0\o\9\n\c\1\c\p\8\v\0\9\8\4\i\k\c\9\q\t\x\n\a\7\f\c\4\d\4\x\l\s\o\h\r\u\y\g\y\l\f\k\7\3\2\r\e\9\e\w\i\s\k\4\w\3\z\f\2\d\e\n\5\7\i\3\h\w\5\s\2\z\g\j\4\k\i\t\p\j\f\5\q\x\z\g\q\q\5\z\q\a\m\e\k\v\j\s\d\d\e\m\9\r\1\y\d\g\i\3\n\0\x\o\h\3\f\w\l\7\4\y\0\n\o\8\m\w\3\6\0\s\0\9\b\w\0\v\s\q\p\4\s\c\i\z\n\w\y\d\d\x\i\r\e\s\1\2\4\g\c\j\b\y\a\p\s\6\i\l\d\s\y\c\l\t\g\f\9\c\3\e\g\f\l\d\o\f\j\0\i\m\i\o\z\r\c\j\q\8\f\1\z\a\9\u\u\6\2\g\f\s\i\e\z\8\n\b\o\j\3\o\z\d\7\v\s\v\w\l\i\i\u ]] 00:05:37.297 20:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:05:37.297 20:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:05:37.297 20:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:37.297 20:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:37.297 20:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:37.297 20:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:05:37.297 [2024-11-26 20:28:51.659228] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:37.297 [2024-11-26 20:28:51.659462] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59974 ] 00:05:37.297 [2024-11-26 20:28:51.801011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.297 [2024-11-26 20:28:51.845463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.559 [2024-11-26 20:28:51.883126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:37.559  [2024-11-26T20:28:52.114Z] Copying: 512/512 [B] (average 500 kBps) 00:05:37.559 00:05:37.559 20:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ tpx8rcy2ciyclr0b87zcpn88pr0gb5hq7ho9fs1yk7ppoe8zokxg8xpfi3myo085n0ih0189xhi9f08bk93rzd1zn75orrcsq6ubt72axjdyi179dncw21pvo7jtlq2c41g60xz0s9jguq8bd1e10zrl4qphtuebksvi79lvztk43y7u5lkyhkm9axyfgceqcvlw3292h20927onb16nrwu4ifw0te268god9ghhhwjk6d6pwl1x5zo82hvvsbeqsir8shu8cqny53vpszn5ggt1iouhuszkxjdu0k3tbjswrzg8xh3d6qs9rczo3sdi57rul3ihacvyl2hu7oumpsz4gi6nw0varz0oypboa6czz2s5gkwj5y0gn060uusekqk4y46y8s26svyznhdpl4z0gmolz1zacvwnm3qjifyi1uv2pmslnczkeztqyll2mfastutzrrvj3ei1mnhe9r2s9wgd4vug7xqk9ekgox1mktbt0bzb3w2qz2q8dqb4 == \t\p\x\8\r\c\y\2\c\i\y\c\l\r\0\b\8\7\z\c\p\n\8\8\p\r\0\g\b\5\h\q\7\h\o\9\f\s\1\y\k\7\p\p\o\e\8\z\o\k\x\g\8\x\p\f\i\3\m\y\o\0\8\5\n\0\i\h\0\1\8\9\x\h\i\9\f\0\8\b\k\9\3\r\z\d\1\z\n\7\5\o\r\r\c\s\q\6\u\b\t\7\2\a\x\j\d\y\i\1\7\9\d\n\c\w\2\1\p\v\o\7\j\t\l\q\2\c\4\1\g\6\0\x\z\0\s\9\j\g\u\q\8\b\d\1\e\1\0\z\r\l\4\q\p\h\t\u\e\b\k\s\v\i\7\9\l\v\z\t\k\4\3\y\7\u\5\l\k\y\h\k\m\9\a\x\y\f\g\c\e\q\c\v\l\w\3\2\9\2\h\2\0\9\2\7\o\n\b\1\6\n\r\w\u\4\i\f\w\0\t\e\2\6\8\g\o\d\9\g\h\h\h\w\j\k\6\d\6\p\w\l\1\x\5\z\o\8\2\h\v\v\s\b\e\q\s\i\r\8\s\h\u\8\c\q\n\y\5\3\v\p\s\z\n\5\g\g\t\1\i\o\u\h\u\s\z\k\x\j\d\u\0\k\3\t\b\j\s\w\r\z\g\8\x\h\3\d\6\q\s\9\r\c\z\o\3\s\d\i\5\7\r\u\l\3\i\h\a\c\v\y\l\2\h\u\7\o\u\m\p\s\z\4\g\i\6\n\w\0\v\a\r\z\0\o\y\p\b\o\a\6\c\z\z\2\s\5\g\k\w\j\5\y\0\g\n\0\6\0\u\u\s\e\k\q\k\4\y\4\6\y\8\s\2\6\s\v\y\z\n\h\d\p\l\4\z\0\g\m\o\l\z\1\z\a\c\v\w\n\m\3\q\j\i\f\y\i\1\u\v\2\p\m\s\l\n\c\z\k\e\z\t\q\y\l\l\2\m\f\a\s\t\u\t\z\r\r\v\j\3\e\i\1\m\n\h\e\9\r\2\s\9\w\g\d\4\v\u\g\7\x\q\k\9\e\k\g\o\x\1\m\k\t\b\t\0\b\z\b\3\w\2\q\z\2\q\8\d\q\b\4 ]] 00:05:37.559 20:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:37.559 20:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:05:37.559 [2024-11-26 20:28:52.091329] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:37.559 [2024-11-26 20:28:52.091408] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59976 ] 00:05:37.820 [2024-11-26 20:28:52.234310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.820 [2024-11-26 20:28:52.276075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.820 [2024-11-26 20:28:52.311663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:37.820  [2024-11-26T20:28:52.636Z] Copying: 512/512 [B] (average 500 kBps) 00:05:38.081 00:05:38.081 20:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ tpx8rcy2ciyclr0b87zcpn88pr0gb5hq7ho9fs1yk7ppoe8zokxg8xpfi3myo085n0ih0189xhi9f08bk93rzd1zn75orrcsq6ubt72axjdyi179dncw21pvo7jtlq2c41g60xz0s9jguq8bd1e10zrl4qphtuebksvi79lvztk43y7u5lkyhkm9axyfgceqcvlw3292h20927onb16nrwu4ifw0te268god9ghhhwjk6d6pwl1x5zo82hvvsbeqsir8shu8cqny53vpszn5ggt1iouhuszkxjdu0k3tbjswrzg8xh3d6qs9rczo3sdi57rul3ihacvyl2hu7oumpsz4gi6nw0varz0oypboa6czz2s5gkwj5y0gn060uusekqk4y46y8s26svyznhdpl4z0gmolz1zacvwnm3qjifyi1uv2pmslnczkeztqyll2mfastutzrrvj3ei1mnhe9r2s9wgd4vug7xqk9ekgox1mktbt0bzb3w2qz2q8dqb4 == \t\p\x\8\r\c\y\2\c\i\y\c\l\r\0\b\8\7\z\c\p\n\8\8\p\r\0\g\b\5\h\q\7\h\o\9\f\s\1\y\k\7\p\p\o\e\8\z\o\k\x\g\8\x\p\f\i\3\m\y\o\0\8\5\n\0\i\h\0\1\8\9\x\h\i\9\f\0\8\b\k\9\3\r\z\d\1\z\n\7\5\o\r\r\c\s\q\6\u\b\t\7\2\a\x\j\d\y\i\1\7\9\d\n\c\w\2\1\p\v\o\7\j\t\l\q\2\c\4\1\g\6\0\x\z\0\s\9\j\g\u\q\8\b\d\1\e\1\0\z\r\l\4\q\p\h\t\u\e\b\k\s\v\i\7\9\l\v\z\t\k\4\3\y\7\u\5\l\k\y\h\k\m\9\a\x\y\f\g\c\e\q\c\v\l\w\3\2\9\2\h\2\0\9\2\7\o\n\b\1\6\n\r\w\u\4\i\f\w\0\t\e\2\6\8\g\o\d\9\g\h\h\h\w\j\k\6\d\6\p\w\l\1\x\5\z\o\8\2\h\v\v\s\b\e\q\s\i\r\8\s\h\u\8\c\q\n\y\5\3\v\p\s\z\n\5\g\g\t\1\i\o\u\h\u\s\z\k\x\j\d\u\0\k\3\t\b\j\s\w\r\z\g\8\x\h\3\d\6\q\s\9\r\c\z\o\3\s\d\i\5\7\r\u\l\3\i\h\a\c\v\y\l\2\h\u\7\o\u\m\p\s\z\4\g\i\6\n\w\0\v\a\r\z\0\o\y\p\b\o\a\6\c\z\z\2\s\5\g\k\w\j\5\y\0\g\n\0\6\0\u\u\s\e\k\q\k\4\y\4\6\y\8\s\2\6\s\v\y\z\n\h\d\p\l\4\z\0\g\m\o\l\z\1\z\a\c\v\w\n\m\3\q\j\i\f\y\i\1\u\v\2\p\m\s\l\n\c\z\k\e\z\t\q\y\l\l\2\m\f\a\s\t\u\t\z\r\r\v\j\3\e\i\1\m\n\h\e\9\r\2\s\9\w\g\d\4\v\u\g\7\x\q\k\9\e\k\g\o\x\1\m\k\t\b\t\0\b\z\b\3\w\2\q\z\2\q\8\d\q\b\4 ]] 00:05:38.081 20:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:38.082 20:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:05:38.082 [2024-11-26 20:28:52.515789] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:38.082 [2024-11-26 20:28:52.516005] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59983 ] 00:05:38.343 [2024-11-26 20:28:52.655227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.343 [2024-11-26 20:28:52.694525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.343 [2024-11-26 20:28:52.729242] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:38.343  [2024-11-26T20:28:52.898Z] Copying: 512/512 [B] (average 166 kBps) 00:05:38.343 00:05:38.607 20:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ tpx8rcy2ciyclr0b87zcpn88pr0gb5hq7ho9fs1yk7ppoe8zokxg8xpfi3myo085n0ih0189xhi9f08bk93rzd1zn75orrcsq6ubt72axjdyi179dncw21pvo7jtlq2c41g60xz0s9jguq8bd1e10zrl4qphtuebksvi79lvztk43y7u5lkyhkm9axyfgceqcvlw3292h20927onb16nrwu4ifw0te268god9ghhhwjk6d6pwl1x5zo82hvvsbeqsir8shu8cqny53vpszn5ggt1iouhuszkxjdu0k3tbjswrzg8xh3d6qs9rczo3sdi57rul3ihacvyl2hu7oumpsz4gi6nw0varz0oypboa6czz2s5gkwj5y0gn060uusekqk4y46y8s26svyznhdpl4z0gmolz1zacvwnm3qjifyi1uv2pmslnczkeztqyll2mfastutzrrvj3ei1mnhe9r2s9wgd4vug7xqk9ekgox1mktbt0bzb3w2qz2q8dqb4 == \t\p\x\8\r\c\y\2\c\i\y\c\l\r\0\b\8\7\z\c\p\n\8\8\p\r\0\g\b\5\h\q\7\h\o\9\f\s\1\y\k\7\p\p\o\e\8\z\o\k\x\g\8\x\p\f\i\3\m\y\o\0\8\5\n\0\i\h\0\1\8\9\x\h\i\9\f\0\8\b\k\9\3\r\z\d\1\z\n\7\5\o\r\r\c\s\q\6\u\b\t\7\2\a\x\j\d\y\i\1\7\9\d\n\c\w\2\1\p\v\o\7\j\t\l\q\2\c\4\1\g\6\0\x\z\0\s\9\j\g\u\q\8\b\d\1\e\1\0\z\r\l\4\q\p\h\t\u\e\b\k\s\v\i\7\9\l\v\z\t\k\4\3\y\7\u\5\l\k\y\h\k\m\9\a\x\y\f\g\c\e\q\c\v\l\w\3\2\9\2\h\2\0\9\2\7\o\n\b\1\6\n\r\w\u\4\i\f\w\0\t\e\2\6\8\g\o\d\9\g\h\h\h\w\j\k\6\d\6\p\w\l\1\x\5\z\o\8\2\h\v\v\s\b\e\q\s\i\r\8\s\h\u\8\c\q\n\y\5\3\v\p\s\z\n\5\g\g\t\1\i\o\u\h\u\s\z\k\x\j\d\u\0\k\3\t\b\j\s\w\r\z\g\8\x\h\3\d\6\q\s\9\r\c\z\o\3\s\d\i\5\7\r\u\l\3\i\h\a\c\v\y\l\2\h\u\7\o\u\m\p\s\z\4\g\i\6\n\w\0\v\a\r\z\0\o\y\p\b\o\a\6\c\z\z\2\s\5\g\k\w\j\5\y\0\g\n\0\6\0\u\u\s\e\k\q\k\4\y\4\6\y\8\s\2\6\s\v\y\z\n\h\d\p\l\4\z\0\g\m\o\l\z\1\z\a\c\v\w\n\m\3\q\j\i\f\y\i\1\u\v\2\p\m\s\l\n\c\z\k\e\z\t\q\y\l\l\2\m\f\a\s\t\u\t\z\r\r\v\j\3\e\i\1\m\n\h\e\9\r\2\s\9\w\g\d\4\v\u\g\7\x\q\k\9\e\k\g\o\x\1\m\k\t\b\t\0\b\z\b\3\w\2\q\z\2\q\8\d\q\b\4 ]] 00:05:38.607 20:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:38.607 20:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:05:38.607 [2024-11-26 20:28:52.933822] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:38.607 [2024-11-26 20:28:52.933889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59991 ] 00:05:38.607 [2024-11-26 20:28:53.075737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.607 [2024-11-26 20:28:53.115274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.607 [2024-11-26 20:28:53.149226] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:38.870  [2024-11-26T20:28:53.425Z] Copying: 512/512 [B] (average 125 kBps) 00:05:38.870 00:05:38.870 20:28:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ tpx8rcy2ciyclr0b87zcpn88pr0gb5hq7ho9fs1yk7ppoe8zokxg8xpfi3myo085n0ih0189xhi9f08bk93rzd1zn75orrcsq6ubt72axjdyi179dncw21pvo7jtlq2c41g60xz0s9jguq8bd1e10zrl4qphtuebksvi79lvztk43y7u5lkyhkm9axyfgceqcvlw3292h20927onb16nrwu4ifw0te268god9ghhhwjk6d6pwl1x5zo82hvvsbeqsir8shu8cqny53vpszn5ggt1iouhuszkxjdu0k3tbjswrzg8xh3d6qs9rczo3sdi57rul3ihacvyl2hu7oumpsz4gi6nw0varz0oypboa6czz2s5gkwj5y0gn060uusekqk4y46y8s26svyznhdpl4z0gmolz1zacvwnm3qjifyi1uv2pmslnczkeztqyll2mfastutzrrvj3ei1mnhe9r2s9wgd4vug7xqk9ekgox1mktbt0bzb3w2qz2q8dqb4 == \t\p\x\8\r\c\y\2\c\i\y\c\l\r\0\b\8\7\z\c\p\n\8\8\p\r\0\g\b\5\h\q\7\h\o\9\f\s\1\y\k\7\p\p\o\e\8\z\o\k\x\g\8\x\p\f\i\3\m\y\o\0\8\5\n\0\i\h\0\1\8\9\x\h\i\9\f\0\8\b\k\9\3\r\z\d\1\z\n\7\5\o\r\r\c\s\q\6\u\b\t\7\2\a\x\j\d\y\i\1\7\9\d\n\c\w\2\1\p\v\o\7\j\t\l\q\2\c\4\1\g\6\0\x\z\0\s\9\j\g\u\q\8\b\d\1\e\1\0\z\r\l\4\q\p\h\t\u\e\b\k\s\v\i\7\9\l\v\z\t\k\4\3\y\7\u\5\l\k\y\h\k\m\9\a\x\y\f\g\c\e\q\c\v\l\w\3\2\9\2\h\2\0\9\2\7\o\n\b\1\6\n\r\w\u\4\i\f\w\0\t\e\2\6\8\g\o\d\9\g\h\h\h\w\j\k\6\d\6\p\w\l\1\x\5\z\o\8\2\h\v\v\s\b\e\q\s\i\r\8\s\h\u\8\c\q\n\y\5\3\v\p\s\z\n\5\g\g\t\1\i\o\u\h\u\s\z\k\x\j\d\u\0\k\3\t\b\j\s\w\r\z\g\8\x\h\3\d\6\q\s\9\r\c\z\o\3\s\d\i\5\7\r\u\l\3\i\h\a\c\v\y\l\2\h\u\7\o\u\m\p\s\z\4\g\i\6\n\w\0\v\a\r\z\0\o\y\p\b\o\a\6\c\z\z\2\s\5\g\k\w\j\5\y\0\g\n\0\6\0\u\u\s\e\k\q\k\4\y\4\6\y\8\s\2\6\s\v\y\z\n\h\d\p\l\4\z\0\g\m\o\l\z\1\z\a\c\v\w\n\m\3\q\j\i\f\y\i\1\u\v\2\p\m\s\l\n\c\z\k\e\z\t\q\y\l\l\2\m\f\a\s\t\u\t\z\r\r\v\j\3\e\i\1\m\n\h\e\9\r\2\s\9\w\g\d\4\v\u\g\7\x\q\k\9\e\k\g\o\x\1\m\k\t\b\t\0\b\z\b\3\w\2\q\z\2\q\8\d\q\b\4 ]] 00:05:38.870 00:05:38.870 real 0m3.501s 00:05:38.870 user 0m1.726s 00:05:38.870 sys 0m0.767s 00:05:38.870 ************************************ 00:05:38.870 END TEST dd_flags_misc_forced_aio 00:05:38.870 ************************************ 00:05:38.870 20:28:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.870 20:28:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:38.870 20:28:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:05:38.870 20:28:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:38.870 20:28:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:38.870 ************************************ 00:05:38.870 END TEST spdk_dd_posix 00:05:38.870 ************************************ 00:05:38.870 00:05:38.870 real 0m17.726s 00:05:38.870 user 0m7.896s 00:05:38.870 sys 0m5.298s 00:05:38.870 20:28:53 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.870 20:28:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:39.132 20:28:53 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:05:39.132 20:28:53 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.132 20:28:53 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.132 20:28:53 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:39.132 ************************************ 00:05:39.132 START TEST spdk_dd_malloc 00:05:39.132 ************************************ 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:05:39.132 * Looking for test storage... 00:05:39.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:39.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.132 --rc genhtml_branch_coverage=1 00:05:39.132 --rc genhtml_function_coverage=1 00:05:39.132 --rc genhtml_legend=1 00:05:39.132 --rc geninfo_all_blocks=1 00:05:39.132 --rc geninfo_unexecuted_blocks=1 00:05:39.132 00:05:39.132 ' 00:05:39.132 20:28:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:39.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.132 --rc genhtml_branch_coverage=1 00:05:39.132 --rc genhtml_function_coverage=1 00:05:39.132 --rc genhtml_legend=1 00:05:39.132 --rc geninfo_all_blocks=1 00:05:39.132 --rc geninfo_unexecuted_blocks=1 00:05:39.132 00:05:39.132 ' 00:05:39.133 20:28:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:39.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.133 --rc genhtml_branch_coverage=1 00:05:39.133 --rc genhtml_function_coverage=1 00:05:39.133 --rc genhtml_legend=1 00:05:39.133 --rc geninfo_all_blocks=1 00:05:39.133 --rc geninfo_unexecuted_blocks=1 00:05:39.133 00:05:39.133 ' 00:05:39.133 20:28:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:39.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.133 --rc genhtml_branch_coverage=1 00:05:39.133 --rc genhtml_function_coverage=1 00:05:39.133 --rc genhtml_legend=1 00:05:39.133 --rc geninfo_all_blocks=1 00:05:39.133 --rc geninfo_unexecuted_blocks=1 00:05:39.133 00:05:39.133 ' 00:05:39.133 20:28:53 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:39.133 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:05:39.133 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:39.133 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:39.133 20:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:39.133 20:28:53 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.133 20:28:53 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.133 20:28:53 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.133 20:28:53 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:05:39.133 20:28:53 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.133 20:28:53 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:05:39.133 20:28:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.133 20:28:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.133 20:28:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:05:39.133 ************************************ 00:05:39.133 START TEST dd_malloc_copy 00:05:39.133 ************************************ 00:05:39.133 20:28:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:05:39.133 20:28:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:05:39.133 20:28:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:05:39.133 20:28:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:05:39.133 20:28:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:05:39.133 20:28:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:05:39.133 20:28:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:05:39.133 20:28:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:05:39.133 20:28:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:05:39.133 20:28:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:05:39.133 20:28:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:05:39.133 [2024-11-26 20:28:53.641249] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:05:39.133 [2024-11-26 20:28:53.641318] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60073 ] 00:05:39.133 { 00:05:39.133 "subsystems": [ 00:05:39.133 { 00:05:39.133 "subsystem": "bdev", 00:05:39.133 "config": [ 00:05:39.133 { 00:05:39.133 "params": { 00:05:39.133 "block_size": 512, 00:05:39.133 "num_blocks": 1048576, 00:05:39.133 "name": "malloc0" 00:05:39.133 }, 00:05:39.133 "method": "bdev_malloc_create" 00:05:39.133 }, 00:05:39.133 { 00:05:39.133 "params": { 00:05:39.133 "block_size": 512, 00:05:39.133 "num_blocks": 1048576, 00:05:39.133 "name": "malloc1" 00:05:39.133 }, 00:05:39.133 "method": "bdev_malloc_create" 00:05:39.133 }, 00:05:39.133 { 00:05:39.133 "method": "bdev_wait_for_examine" 00:05:39.133 } 00:05:39.133 ] 00:05:39.133 } 00:05:39.133 ] 00:05:39.133 } 00:05:39.394 [2024-11-26 20:28:53.783768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.394 [2024-11-26 20:28:53.824013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.395 [2024-11-26 20:28:53.858121] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:40.780  [2024-11-26T20:28:56.279Z] Copying: 206/512 [MB] (206 MBps) [2024-11-26T20:28:56.903Z] Copying: 413/512 [MB] (207 MBps) [2024-11-26T20:28:56.903Z] Copying: 512/512 [MB] (average 206 MBps) 00:05:42.348 00:05:42.348 20:28:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:05:42.348 20:28:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:05:42.348 20:28:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:05:42.348 20:28:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:05:42.610 [2024-11-26 20:28:56.922235] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:42.610 [2024-11-26 20:28:56.922301] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60115 ] 00:05:42.610 { 00:05:42.610 "subsystems": [ 00:05:42.610 { 00:05:42.610 "subsystem": "bdev", 00:05:42.610 "config": [ 00:05:42.610 { 00:05:42.610 "params": { 00:05:42.610 "block_size": 512, 00:05:42.610 "num_blocks": 1048576, 00:05:42.610 "name": "malloc0" 00:05:42.610 }, 00:05:42.610 "method": "bdev_malloc_create" 00:05:42.610 }, 00:05:42.610 { 00:05:42.610 "params": { 00:05:42.610 "block_size": 512, 00:05:42.610 "num_blocks": 1048576, 00:05:42.610 "name": "malloc1" 00:05:42.610 }, 00:05:42.610 "method": "bdev_malloc_create" 00:05:42.610 }, 00:05:42.610 { 00:05:42.610 "method": "bdev_wait_for_examine" 00:05:42.610 } 00:05:42.610 ] 00:05:42.610 } 00:05:42.610 ] 00:05:42.610 } 00:05:42.610 [2024-11-26 20:28:57.061314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.610 [2024-11-26 20:28:57.101358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.610 [2024-11-26 20:28:57.137524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:43.998  [2024-11-26T20:28:59.494Z] Copying: 200/512 [MB] (200 MBps) [2024-11-26T20:29:00.067Z] Copying: 402/512 [MB] (201 MBps) [2024-11-26T20:29:00.642Z] Copying: 512/512 [MB] (average 201 MBps) 00:05:46.087 00:05:46.087 ************************************ 00:05:46.087 END TEST dd_malloc_copy 00:05:46.087 ************************************ 00:05:46.087 00:05:46.087 real 0m6.864s 00:05:46.087 user 0m6.110s 00:05:46.087 sys 0m0.558s 00:05:46.087 20:29:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.087 20:29:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:05:46.087 00:05:46.087 real 0m7.089s 00:05:46.087 user 0m6.216s 00:05:46.087 sys 0m0.658s 00:05:46.087 20:29:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.087 ************************************ 00:05:46.087 END TEST spdk_dd_malloc 00:05:46.087 ************************************ 00:05:46.087 20:29:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:05:46.087 20:29:00 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:05:46.087 20:29:00 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:46.087 20:29:00 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.087 20:29:00 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:46.087 ************************************ 00:05:46.087 START TEST spdk_dd_bdev_to_bdev 00:05:46.087 ************************************ 00:05:46.087 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:05:46.351 * Looking for test storage... 
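For reference, the dd_malloc_copy pass that just completed reduces to the shell sketch below, reconstructed from the traced spdk_dd invocation and the JSON config echoed above. The spdk_dd path and the malloc bdev parameters are taken from this log; the variable names and feeding the config via process substitution (the test itself passes it over /dev/fd/62) are illustrative stand-ins, not the exact dd/malloc.sh script.

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    # Two ramdisk bdevs of 1048576 blocks x 512 B (512 MiB each), matching the config shown above.
    conf='{
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
              "method": "bdev_malloc_create" },
            { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" },
              "method": "bdev_malloc_create" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }'
    # Copy malloc0 -> malloc1, then back again, as exercised above (~200 MB/s per direction in this run).
    "$SPDK_DD" --ib=malloc0 --ob=malloc1 --json <(echo "$conf")
    "$SPDK_DD" --ib=malloc1 --ob=malloc0 --json <(echo "$conf")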
00:05:46.351 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.351 --rc genhtml_branch_coverage=1 00:05:46.351 --rc genhtml_function_coverage=1 00:05:46.351 --rc genhtml_legend=1 00:05:46.351 --rc geninfo_all_blocks=1 00:05:46.351 --rc geninfo_unexecuted_blocks=1 00:05:46.351 00:05:46.351 ' 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.351 --rc genhtml_branch_coverage=1 00:05:46.351 --rc genhtml_function_coverage=1 00:05:46.351 --rc genhtml_legend=1 00:05:46.351 --rc geninfo_all_blocks=1 00:05:46.351 --rc geninfo_unexecuted_blocks=1 00:05:46.351 00:05:46.351 ' 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:46.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.351 --rc genhtml_branch_coverage=1 00:05:46.351 --rc genhtml_function_coverage=1 00:05:46.351 --rc genhtml_legend=1 00:05:46.351 --rc geninfo_all_blocks=1 00:05:46.351 --rc geninfo_unexecuted_blocks=1 00:05:46.351 00:05:46.351 ' 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.351 --rc genhtml_branch_coverage=1 00:05:46.351 --rc genhtml_function_coverage=1 00:05:46.351 --rc genhtml_legend=1 00:05:46.351 --rc geninfo_all_blocks=1 00:05:46.351 --rc geninfo_unexecuted_blocks=1 00:05:46.351 00:05:46.351 ' 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.351 20:29:00 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:05:46.351 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:05:46.352 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:05:46.352 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:05:46.352 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:05:46.352 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:05:46.352 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:46.352 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:05:46.352 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:05:46.352 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:46.352 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:46.352 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:05:46.352 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:05:46.352 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:05:46.352 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:05:46.352 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.352 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:46.352 ************************************ 00:05:46.352 START TEST dd_inflate_file 00:05:46.352 ************************************ 00:05:46.352 20:29:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:05:46.352 [2024-11-26 20:29:00.810522] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:46.352 [2024-11-26 20:29:00.810660] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60233 ] 00:05:46.612 [2024-11-26 20:29:00.953943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.612 [2024-11-26 20:29:01.005934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.612 [2024-11-26 20:29:01.060054] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:46.612  [2024-11-26T20:29:01.429Z] Copying: 64/64 [MB] (average 1523 MBps) 00:05:46.874 00:05:46.874 00:05:46.874 real 0m0.545s 00:05:46.874 user 0m0.302s 00:05:46.874 sys 0m0.293s 00:05:46.874 20:29:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.874 20:29:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:05:46.874 ************************************ 00:05:46.874 END TEST dd_inflate_file 00:05:46.874 ************************************ 00:05:46.874 20:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:05:46.874 20:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:05:46.874 20:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:05:46.874 20:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:46.874 20:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.874 20:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:46.874 20:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:05:46.874 20:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:05:46.874 20:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:46.874 ************************************ 00:05:46.874 START TEST dd_copy_to_out_bdev 00:05:46.874 ************************************ 00:05:46.874 20:29:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:05:46.874 { 00:05:46.874 "subsystems": [ 00:05:46.874 { 00:05:46.874 "subsystem": "bdev", 00:05:46.874 "config": [ 00:05:46.874 { 00:05:46.874 "params": { 00:05:46.874 "trtype": "pcie", 00:05:46.874 "traddr": "0000:00:10.0", 00:05:46.874 "name": "Nvme0" 00:05:46.874 }, 00:05:46.874 "method": "bdev_nvme_attach_controller" 00:05:46.874 }, 00:05:46.874 { 00:05:46.874 "params": { 00:05:46.874 "trtype": "pcie", 00:05:46.874 "traddr": "0000:00:11.0", 00:05:46.874 "name": "Nvme1" 00:05:46.874 }, 00:05:46.874 "method": "bdev_nvme_attach_controller" 00:05:46.874 }, 00:05:46.874 { 00:05:46.874 "method": "bdev_wait_for_examine" 00:05:46.874 } 00:05:46.874 ] 00:05:46.874 } 00:05:46.874 ] 00:05:46.874 } 00:05:46.874 [2024-11-26 20:29:01.426569] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:47.135 [2024-11-26 20:29:01.427345] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60263 ] 00:05:47.135 [2024-11-26 20:29:01.570527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.135 [2024-11-26 20:29:01.629301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.135 [2024-11-26 20:29:01.686888] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.523  [2024-11-26T20:29:04.021Z] Copying: 14/64 [MB] (14 MBps) [2024-11-26T20:29:04.966Z] Copying: 27/64 [MB] (13 MBps) [2024-11-26T20:29:05.909Z] Copying: 40/64 [MB] (13 MBps) [2024-11-26T20:29:06.482Z] Copying: 57/64 [MB] (16 MBps) [2024-11-26T20:29:06.744Z] Copying: 64/64 [MB] (average 14 MBps) 00:05:52.189 00:05:52.189 00:05:52.189 real 0m5.301s 00:05:52.189 user 0m4.965s 00:05:52.189 sys 0m4.987s 00:05:52.189 20:29:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.189 ************************************ 00:05:52.189 END TEST dd_copy_to_out_bdev 00:05:52.189 ************************************ 00:05:52.189 20:29:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:52.450 20:29:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:05:52.450 20:29:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:05:52.450 20:29:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.450 20:29:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.450 20:29:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:52.450 ************************************ 00:05:52.450 START TEST dd_offset_magic 00:05:52.450 ************************************ 00:05:52.450 20:29:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:05:52.450 20:29:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:05:52.450 20:29:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:05:52.450 20:29:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:05:52.450 20:29:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:05:52.450 20:29:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:05:52.450 20:29:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:05:52.450 20:29:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:05:52.450 20:29:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:05:52.450 [2024-11-26 20:29:06.795884] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:52.450 [2024-11-26 20:29:06.795975] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60339 ] 00:05:52.450 { 00:05:52.450 "subsystems": [ 00:05:52.450 { 00:05:52.450 "subsystem": "bdev", 00:05:52.450 "config": [ 00:05:52.450 { 00:05:52.450 "params": { 00:05:52.450 "trtype": "pcie", 00:05:52.450 "traddr": "0000:00:10.0", 00:05:52.450 "name": "Nvme0" 00:05:52.450 }, 00:05:52.450 "method": "bdev_nvme_attach_controller" 00:05:52.450 }, 00:05:52.450 { 00:05:52.450 "params": { 00:05:52.450 "trtype": "pcie", 00:05:52.450 "traddr": "0000:00:11.0", 00:05:52.450 "name": "Nvme1" 00:05:52.450 }, 00:05:52.450 "method": "bdev_nvme_attach_controller" 00:05:52.450 }, 00:05:52.450 { 00:05:52.450 "method": "bdev_wait_for_examine" 00:05:52.450 } 00:05:52.450 ] 00:05:52.450 } 00:05:52.450 ] 00:05:52.450 } 00:05:52.450 [2024-11-26 20:29:06.934421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.450 [2024-11-26 20:29:06.994540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.710 [2024-11-26 20:29:07.057873] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.971  [2024-11-26T20:29:07.786Z] Copying: 65/65 [MB] (average 698 MBps) 00:05:53.231 00:05:53.231 20:29:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:05:53.231 20:29:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:05:53.231 20:29:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:05:53.231 20:29:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:05:53.231 [2024-11-26 20:29:07.682781] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:53.231 [2024-11-26 20:29:07.682868] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60359 ] 00:05:53.231 { 00:05:53.231 "subsystems": [ 00:05:53.231 { 00:05:53.231 "subsystem": "bdev", 00:05:53.231 "config": [ 00:05:53.231 { 00:05:53.231 "params": { 00:05:53.231 "trtype": "pcie", 00:05:53.231 "traddr": "0000:00:10.0", 00:05:53.231 "name": "Nvme0" 00:05:53.231 }, 00:05:53.231 "method": "bdev_nvme_attach_controller" 00:05:53.231 }, 00:05:53.231 { 00:05:53.231 "params": { 00:05:53.231 "trtype": "pcie", 00:05:53.231 "traddr": "0000:00:11.0", 00:05:53.231 "name": "Nvme1" 00:05:53.231 }, 00:05:53.231 "method": "bdev_nvme_attach_controller" 00:05:53.231 }, 00:05:53.231 { 00:05:53.231 "method": "bdev_wait_for_examine" 00:05:53.231 } 00:05:53.231 ] 00:05:53.231 } 00:05:53.231 ] 00:05:53.231 } 00:05:53.493 [2024-11-26 20:29:07.825446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.493 [2024-11-26 20:29:07.878955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.493 [2024-11-26 20:29:07.935676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.754  [2024-11-26T20:29:08.309Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:53.754 00:05:53.754 20:29:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:05:53.754 20:29:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:05:53.754 20:29:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:05:53.754 20:29:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:05:53.754 20:29:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:05:53.754 20:29:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:05:53.754 20:29:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:05:54.017 [2024-11-26 20:29:08.337043] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:54.017 [2024-11-26 20:29:08.337134] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60381 ] 00:05:54.017 { 00:05:54.017 "subsystems": [ 00:05:54.017 { 00:05:54.017 "subsystem": "bdev", 00:05:54.017 "config": [ 00:05:54.017 { 00:05:54.017 "params": { 00:05:54.017 "trtype": "pcie", 00:05:54.017 "traddr": "0000:00:10.0", 00:05:54.017 "name": "Nvme0" 00:05:54.017 }, 00:05:54.017 "method": "bdev_nvme_attach_controller" 00:05:54.017 }, 00:05:54.017 { 00:05:54.017 "params": { 00:05:54.017 "trtype": "pcie", 00:05:54.017 "traddr": "0000:00:11.0", 00:05:54.017 "name": "Nvme1" 00:05:54.017 }, 00:05:54.017 "method": "bdev_nvme_attach_controller" 00:05:54.017 }, 00:05:54.017 { 00:05:54.017 "method": "bdev_wait_for_examine" 00:05:54.017 } 00:05:54.017 ] 00:05:54.017 } 00:05:54.017 ] 00:05:54.017 } 00:05:54.017 [2024-11-26 20:29:08.479108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.017 [2024-11-26 20:29:08.532636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.278 [2024-11-26 20:29:08.586993] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:54.540  [2024-11-26T20:29:09.359Z] Copying: 65/65 [MB] (average 698 MBps) 00:05:54.804 00:05:54.804 20:29:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:05:54.804 20:29:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:05:54.804 20:29:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:05:54.804 20:29:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:05:54.804 [2024-11-26 20:29:09.349932] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:54.804 [2024-11-26 20:29:09.350023] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60401 ] 00:05:55.065 { 00:05:55.065 "subsystems": [ 00:05:55.065 { 00:05:55.065 "subsystem": "bdev", 00:05:55.065 "config": [ 00:05:55.065 { 00:05:55.065 "params": { 00:05:55.065 "trtype": "pcie", 00:05:55.065 "traddr": "0000:00:10.0", 00:05:55.065 "name": "Nvme0" 00:05:55.065 }, 00:05:55.065 "method": "bdev_nvme_attach_controller" 00:05:55.065 }, 00:05:55.065 { 00:05:55.065 "params": { 00:05:55.065 "trtype": "pcie", 00:05:55.065 "traddr": "0000:00:11.0", 00:05:55.065 "name": "Nvme1" 00:05:55.065 }, 00:05:55.065 "method": "bdev_nvme_attach_controller" 00:05:55.065 }, 00:05:55.065 { 00:05:55.065 "method": "bdev_wait_for_examine" 00:05:55.065 } 00:05:55.065 ] 00:05:55.065 } 00:05:55.065 ] 00:05:55.065 } 00:05:55.065 [2024-11-26 20:29:09.483879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.065 [2024-11-26 20:29:09.540864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.065 [2024-11-26 20:29:09.592766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.325  [2024-11-26T20:29:10.141Z] Copying: 1024/1024 [kB] (average 333 MBps) 00:05:55.586 00:05:55.586 20:29:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:05:55.586 20:29:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:05:55.586 00:05:55.586 real 0m3.206s 00:05:55.586 user 0m2.268s 00:05:55.586 sys 0m0.926s 00:05:55.586 20:29:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.586 20:29:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:05:55.586 ************************************ 00:05:55.586 END TEST dd_offset_magic 00:05:55.586 ************************************ 00:05:55.586 20:29:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:05:55.586 20:29:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:05:55.586 20:29:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:55.586 20:29:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:05:55.587 20:29:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:05:55.587 20:29:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:05:55.587 20:29:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:05:55.587 20:29:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:05:55.587 20:29:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:05:55.587 20:29:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:05:55.587 20:29:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:55.587 [2024-11-26 20:29:10.069816] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:55.587 [2024-11-26 20:29:10.069910] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60427 ] 00:05:55.587 { 00:05:55.587 "subsystems": [ 00:05:55.587 { 00:05:55.587 "subsystem": "bdev", 00:05:55.587 "config": [ 00:05:55.587 { 00:05:55.587 "params": { 00:05:55.587 "trtype": "pcie", 00:05:55.587 "traddr": "0000:00:10.0", 00:05:55.587 "name": "Nvme0" 00:05:55.587 }, 00:05:55.587 "method": "bdev_nvme_attach_controller" 00:05:55.587 }, 00:05:55.587 { 00:05:55.587 "params": { 00:05:55.587 "trtype": "pcie", 00:05:55.587 "traddr": "0000:00:11.0", 00:05:55.587 "name": "Nvme1" 00:05:55.587 }, 00:05:55.587 "method": "bdev_nvme_attach_controller" 00:05:55.587 }, 00:05:55.587 { 00:05:55.587 "method": "bdev_wait_for_examine" 00:05:55.587 } 00:05:55.587 ] 00:05:55.587 } 00:05:55.587 ] 00:05:55.587 } 00:05:55.849 [2024-11-26 20:29:10.209460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.849 [2024-11-26 20:29:10.267802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.849 [2024-11-26 20:29:10.326266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.110  [2024-11-26T20:29:10.927Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:05:56.372 00:05:56.372 20:29:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:05:56.372 20:29:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:05:56.372 20:29:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:05:56.372 20:29:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:05:56.372 20:29:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:05:56.372 20:29:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:05:56.372 20:29:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:05:56.372 20:29:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:05:56.372 20:29:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:05:56.372 20:29:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:56.372 [2024-11-26 20:29:10.755784] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:56.372 [2024-11-26 20:29:10.755877] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60448 ] 00:05:56.372 { 00:05:56.372 "subsystems": [ 00:05:56.372 { 00:05:56.372 "subsystem": "bdev", 00:05:56.372 "config": [ 00:05:56.372 { 00:05:56.372 "params": { 00:05:56.372 "trtype": "pcie", 00:05:56.372 "traddr": "0000:00:10.0", 00:05:56.372 "name": "Nvme0" 00:05:56.372 }, 00:05:56.372 "method": "bdev_nvme_attach_controller" 00:05:56.372 }, 00:05:56.372 { 00:05:56.372 "params": { 00:05:56.372 "trtype": "pcie", 00:05:56.372 "traddr": "0000:00:11.0", 00:05:56.372 "name": "Nvme1" 00:05:56.372 }, 00:05:56.372 "method": "bdev_nvme_attach_controller" 00:05:56.372 }, 00:05:56.372 { 00:05:56.372 "method": "bdev_wait_for_examine" 00:05:56.372 } 00:05:56.372 ] 00:05:56.372 } 00:05:56.372 ] 00:05:56.372 } 00:05:56.372 [2024-11-26 20:29:10.895513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.633 [2024-11-26 20:29:10.949385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.633 [2024-11-26 20:29:11.005780] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.895  [2024-11-26T20:29:11.450Z] Copying: 5120/5120 [kB] (average 555 MBps) 00:05:56.895 00:05:56.895 20:29:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:05:56.895 ************************************ 00:05:56.895 END TEST spdk_dd_bdev_to_bdev 00:05:56.895 ************************************ 00:05:56.895 00:05:56.895 real 0m10.828s 00:05:56.895 user 0m8.609s 00:05:56.895 sys 0m6.930s 00:05:56.895 20:29:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.895 20:29:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:57.158 20:29:11 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:05:57.158 20:29:11 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:05:57.158 20:29:11 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.158 20:29:11 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.158 20:29:11 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:57.158 ************************************ 00:05:57.158 START TEST spdk_dd_uring 00:05:57.158 ************************************ 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:05:57.158 * Looking for test storage... 
00:05:57.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.158 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:57.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.159 --rc genhtml_branch_coverage=1 00:05:57.159 --rc genhtml_function_coverage=1 00:05:57.159 --rc genhtml_legend=1 00:05:57.159 --rc geninfo_all_blocks=1 00:05:57.159 --rc geninfo_unexecuted_blocks=1 00:05:57.159 00:05:57.159 ' 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:57.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.159 --rc genhtml_branch_coverage=1 00:05:57.159 --rc genhtml_function_coverage=1 00:05:57.159 --rc genhtml_legend=1 00:05:57.159 --rc geninfo_all_blocks=1 00:05:57.159 --rc geninfo_unexecuted_blocks=1 00:05:57.159 00:05:57.159 ' 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:57.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.159 --rc genhtml_branch_coverage=1 00:05:57.159 --rc genhtml_function_coverage=1 00:05:57.159 --rc genhtml_legend=1 00:05:57.159 --rc geninfo_all_blocks=1 00:05:57.159 --rc geninfo_unexecuted_blocks=1 00:05:57.159 00:05:57.159 ' 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:57.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.159 --rc genhtml_branch_coverage=1 00:05:57.159 --rc genhtml_function_coverage=1 00:05:57.159 --rc genhtml_legend=1 00:05:57.159 --rc geninfo_all_blocks=1 00:05:57.159 --rc geninfo_unexecuted_blocks=1 00:05:57.159 00:05:57.159 ' 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:05:57.159 ************************************ 00:05:57.159 START TEST dd_uring_copy 00:05:57.159 ************************************ 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:05:57.159 
20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=hxszppdj9ggk80giymeqh0iax5s5nzagj62nrgcepihldvprgb3zu95gvyzhubuwaj9fynk417lslt46igxm8237kunpadbwm4uvm8o8z6dt0gd2xozs8tq3i6goiktvexs23xyjn2mt8tpx2gmdguet0zjgmskukb2bnjmuyvbe0x3f0k4hi3dm88do7zymue14z53kls9ci8oxoqunxst7tgngmcbqhm76k4qkb4vn0wxr7e7duz2szdhlhusxeed3a5nharheznvdzyw61358aozkbo8ynzvzi8er3jo50xnttollh99iezslbaknutun8bqvta5h9hmmj6a7vwmap36xgc87zja3dv5oo4ainghv6vvs0qhk5kjfhnk27vo92tp7m3rz56ikktr5jz3qpjgxpprfvcej7xurc9oialdjtg3lt5i17wea28o0apf1h38dpp6uo2x8j362mb5ap16k9rs4zhl0c1eozuezt3zugqypmm9ptq3vdj8kefsllgfri8vyzhkuhuevt8pbo9i8rh21nk35x5hrm1fofsyhnli8wb4r5ofg0h1x2pr42a81ek0rjd8vamxza79uvn8aeadb6r6diac4jklbrmodkig68z58ayll2vwsdnucq0r4k21su7dvy7t3dqlud3e3j3b2gy0bu1upjeljf3lio82zu53jpgfazytlfsjiqyaoqjpohpkhtrdg8wfkxbsfrjcwvy5gdn6wdykiktb11w3u7cflkvwa04194vcmdib9u9jwcp2lpwr3j352rpgafn5a01inm0j4sagcmurnrbrtm89i16ahwx30qg3hn5b376seb4mzf6eb9q8t95r1ngnekx5kx0cyphhhm2wazki8qx6akmhh0751s1sm2plmij8a9v3bd2f49j37f5m317zqklei7a5ildfqf9ewx9bkx2eea9i48ivw3bk1tea22fur6cmqak5su8rky9df4tjomenrr0nn3ww9elvhnavwsni3t6eq1ufn 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
hxszppdj9ggk80giymeqh0iax5s5nzagj62nrgcepihldvprgb3zu95gvyzhubuwaj9fynk417lslt46igxm8237kunpadbwm4uvm8o8z6dt0gd2xozs8tq3i6goiktvexs23xyjn2mt8tpx2gmdguet0zjgmskukb2bnjmuyvbe0x3f0k4hi3dm88do7zymue14z53kls9ci8oxoqunxst7tgngmcbqhm76k4qkb4vn0wxr7e7duz2szdhlhusxeed3a5nharheznvdzyw61358aozkbo8ynzvzi8er3jo50xnttollh99iezslbaknutun8bqvta5h9hmmj6a7vwmap36xgc87zja3dv5oo4ainghv6vvs0qhk5kjfhnk27vo92tp7m3rz56ikktr5jz3qpjgxpprfvcej7xurc9oialdjtg3lt5i17wea28o0apf1h38dpp6uo2x8j362mb5ap16k9rs4zhl0c1eozuezt3zugqypmm9ptq3vdj8kefsllgfri8vyzhkuhuevt8pbo9i8rh21nk35x5hrm1fofsyhnli8wb4r5ofg0h1x2pr42a81ek0rjd8vamxza79uvn8aeadb6r6diac4jklbrmodkig68z58ayll2vwsdnucq0r4k21su7dvy7t3dqlud3e3j3b2gy0bu1upjeljf3lio82zu53jpgfazytlfsjiqyaoqjpohpkhtrdg8wfkxbsfrjcwvy5gdn6wdykiktb11w3u7cflkvwa04194vcmdib9u9jwcp2lpwr3j352rpgafn5a01inm0j4sagcmurnrbrtm89i16ahwx30qg3hn5b376seb4mzf6eb9q8t95r1ngnekx5kx0cyphhhm2wazki8qx6akmhh0751s1sm2plmij8a9v3bd2f49j37f5m317zqklei7a5ildfqf9ewx9bkx2eea9i48ivw3bk1tea22fur6cmqak5su8rky9df4tjomenrr0nn3ww9elvhnavwsni3t6eq1ufn 00:05:57.159 20:29:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:05:57.421 [2024-11-26 20:29:11.735764] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:05:57.421 [2024-11-26 20:29:11.735856] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60526 ] 00:05:57.421 [2024-11-26 20:29:11.876009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.421 [2024-11-26 20:29:11.931941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.683 [2024-11-26 20:29:11.982802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.255  [2024-11-26T20:29:13.071Z] Copying: 511/511 [MB] (average 1741 MBps) 00:05:58.516 00:05:58.516 20:29:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:05:58.516 20:29:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:05:58.516 20:29:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:05:58.516 20:29:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:05:58.516 [2024-11-26 20:29:12.922446] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:05:58.516 [2024-11-26 20:29:12.922863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60542 ] 00:05:58.516 { 00:05:58.516 "subsystems": [ 00:05:58.516 { 00:05:58.516 "subsystem": "bdev", 00:05:58.516 "config": [ 00:05:58.516 { 00:05:58.516 "params": { 00:05:58.516 "block_size": 512, 00:05:58.516 "num_blocks": 1048576, 00:05:58.516 "name": "malloc0" 00:05:58.516 }, 00:05:58.516 "method": "bdev_malloc_create" 00:05:58.516 }, 00:05:58.516 { 00:05:58.516 "params": { 00:05:58.516 "filename": "/dev/zram1", 00:05:58.516 "name": "uring0" 00:05:58.516 }, 00:05:58.516 "method": "bdev_uring_create" 00:05:58.516 }, 00:05:58.516 { 00:05:58.516 "method": "bdev_wait_for_examine" 00:05:58.516 } 00:05:58.516 ] 00:05:58.516 } 00:05:58.516 ] 00:05:58.516 } 00:05:58.516 [2024-11-26 20:29:13.066440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.778 [2024-11-26 20:29:13.126364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.778 [2024-11-26 20:29:13.184744] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.218  [2024-11-26T20:29:15.712Z] Copying: 250/512 [MB] (250 MBps) [2024-11-26T20:29:15.712Z] Copying: 472/512 [MB] (221 MBps) [2024-11-26T20:29:15.971Z] Copying: 512/512 [MB] (average 234 MBps) 00:06:01.416 00:06:01.416 20:29:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:06:01.416 20:29:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:06:01.416 20:29:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:01.416 20:29:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:01.416 [2024-11-26 20:29:15.914505] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:06:01.416 [2024-11-26 20:29:15.914750] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60586 ] 00:06:01.416 { 00:06:01.416 "subsystems": [ 00:06:01.416 { 00:06:01.416 "subsystem": "bdev", 00:06:01.416 "config": [ 00:06:01.416 { 00:06:01.416 "params": { 00:06:01.416 "block_size": 512, 00:06:01.416 "num_blocks": 1048576, 00:06:01.416 "name": "malloc0" 00:06:01.416 }, 00:06:01.416 "method": "bdev_malloc_create" 00:06:01.416 }, 00:06:01.416 { 00:06:01.416 "params": { 00:06:01.416 "filename": "/dev/zram1", 00:06:01.416 "name": "uring0" 00:06:01.416 }, 00:06:01.416 "method": "bdev_uring_create" 00:06:01.416 }, 00:06:01.416 { 00:06:01.416 "method": "bdev_wait_for_examine" 00:06:01.416 } 00:06:01.416 ] 00:06:01.416 } 00:06:01.416 ] 00:06:01.416 } 00:06:01.677 [2024-11-26 20:29:16.055680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.677 [2024-11-26 20:29:16.104614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.677 [2024-11-26 20:29:16.156173] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.063  [2024-11-26T20:29:18.561Z] Copying: 160/512 [MB] (160 MBps) [2024-11-26T20:29:19.504Z] Copying: 353/512 [MB] (193 MBps) [2024-11-26T20:29:19.504Z] Copying: 508/512 [MB] (154 MBps) [2024-11-26T20:29:19.767Z] Copying: 512/512 [MB] (average 169 MBps) 00:06:05.212 00:06:05.212 20:29:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:06:05.212 20:29:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ hxszppdj9ggk80giymeqh0iax5s5nzagj62nrgcepihldvprgb3zu95gvyzhubuwaj9fynk417lslt46igxm8237kunpadbwm4uvm8o8z6dt0gd2xozs8tq3i6goiktvexs23xyjn2mt8tpx2gmdguet0zjgmskukb2bnjmuyvbe0x3f0k4hi3dm88do7zymue14z53kls9ci8oxoqunxst7tgngmcbqhm76k4qkb4vn0wxr7e7duz2szdhlhusxeed3a5nharheznvdzyw61358aozkbo8ynzvzi8er3jo50xnttollh99iezslbaknutun8bqvta5h9hmmj6a7vwmap36xgc87zja3dv5oo4ainghv6vvs0qhk5kjfhnk27vo92tp7m3rz56ikktr5jz3qpjgxpprfvcej7xurc9oialdjtg3lt5i17wea28o0apf1h38dpp6uo2x8j362mb5ap16k9rs4zhl0c1eozuezt3zugqypmm9ptq3vdj8kefsllgfri8vyzhkuhuevt8pbo9i8rh21nk35x5hrm1fofsyhnli8wb4r5ofg0h1x2pr42a81ek0rjd8vamxza79uvn8aeadb6r6diac4jklbrmodkig68z58ayll2vwsdnucq0r4k21su7dvy7t3dqlud3e3j3b2gy0bu1upjeljf3lio82zu53jpgfazytlfsjiqyaoqjpohpkhtrdg8wfkxbsfrjcwvy5gdn6wdykiktb11w3u7cflkvwa04194vcmdib9u9jwcp2lpwr3j352rpgafn5a01inm0j4sagcmurnrbrtm89i16ahwx30qg3hn5b376seb4mzf6eb9q8t95r1ngnekx5kx0cyphhhm2wazki8qx6akmhh0751s1sm2plmij8a9v3bd2f49j37f5m317zqklei7a5ildfqf9ewx9bkx2eea9i48ivw3bk1tea22fur6cmqak5su8rky9df4tjomenrr0nn3ww9elvhnavwsni3t6eq1ufn == 
\h\x\s\z\p\p\d\j\9\g\g\k\8\0\g\i\y\m\e\q\h\0\i\a\x\5\s\5\n\z\a\g\j\6\2\n\r\g\c\e\p\i\h\l\d\v\p\r\g\b\3\z\u\9\5\g\v\y\z\h\u\b\u\w\a\j\9\f\y\n\k\4\1\7\l\s\l\t\4\6\i\g\x\m\8\2\3\7\k\u\n\p\a\d\b\w\m\4\u\v\m\8\o\8\z\6\d\t\0\g\d\2\x\o\z\s\8\t\q\3\i\6\g\o\i\k\t\v\e\x\s\2\3\x\y\j\n\2\m\t\8\t\p\x\2\g\m\d\g\u\e\t\0\z\j\g\m\s\k\u\k\b\2\b\n\j\m\u\y\v\b\e\0\x\3\f\0\k\4\h\i\3\d\m\8\8\d\o\7\z\y\m\u\e\1\4\z\5\3\k\l\s\9\c\i\8\o\x\o\q\u\n\x\s\t\7\t\g\n\g\m\c\b\q\h\m\7\6\k\4\q\k\b\4\v\n\0\w\x\r\7\e\7\d\u\z\2\s\z\d\h\l\h\u\s\x\e\e\d\3\a\5\n\h\a\r\h\e\z\n\v\d\z\y\w\6\1\3\5\8\a\o\z\k\b\o\8\y\n\z\v\z\i\8\e\r\3\j\o\5\0\x\n\t\t\o\l\l\h\9\9\i\e\z\s\l\b\a\k\n\u\t\u\n\8\b\q\v\t\a\5\h\9\h\m\m\j\6\a\7\v\w\m\a\p\3\6\x\g\c\8\7\z\j\a\3\d\v\5\o\o\4\a\i\n\g\h\v\6\v\v\s\0\q\h\k\5\k\j\f\h\n\k\2\7\v\o\9\2\t\p\7\m\3\r\z\5\6\i\k\k\t\r\5\j\z\3\q\p\j\g\x\p\p\r\f\v\c\e\j\7\x\u\r\c\9\o\i\a\l\d\j\t\g\3\l\t\5\i\1\7\w\e\a\2\8\o\0\a\p\f\1\h\3\8\d\p\p\6\u\o\2\x\8\j\3\6\2\m\b\5\a\p\1\6\k\9\r\s\4\z\h\l\0\c\1\e\o\z\u\e\z\t\3\z\u\g\q\y\p\m\m\9\p\t\q\3\v\d\j\8\k\e\f\s\l\l\g\f\r\i\8\v\y\z\h\k\u\h\u\e\v\t\8\p\b\o\9\i\8\r\h\2\1\n\k\3\5\x\5\h\r\m\1\f\o\f\s\y\h\n\l\i\8\w\b\4\r\5\o\f\g\0\h\1\x\2\p\r\4\2\a\8\1\e\k\0\r\j\d\8\v\a\m\x\z\a\7\9\u\v\n\8\a\e\a\d\b\6\r\6\d\i\a\c\4\j\k\l\b\r\m\o\d\k\i\g\6\8\z\5\8\a\y\l\l\2\v\w\s\d\n\u\c\q\0\r\4\k\2\1\s\u\7\d\v\y\7\t\3\d\q\l\u\d\3\e\3\j\3\b\2\g\y\0\b\u\1\u\p\j\e\l\j\f\3\l\i\o\8\2\z\u\5\3\j\p\g\f\a\z\y\t\l\f\s\j\i\q\y\a\o\q\j\p\o\h\p\k\h\t\r\d\g\8\w\f\k\x\b\s\f\r\j\c\w\v\y\5\g\d\n\6\w\d\y\k\i\k\t\b\1\1\w\3\u\7\c\f\l\k\v\w\a\0\4\1\9\4\v\c\m\d\i\b\9\u\9\j\w\c\p\2\l\p\w\r\3\j\3\5\2\r\p\g\a\f\n\5\a\0\1\i\n\m\0\j\4\s\a\g\c\m\u\r\n\r\b\r\t\m\8\9\i\1\6\a\h\w\x\3\0\q\g\3\h\n\5\b\3\7\6\s\e\b\4\m\z\f\6\e\b\9\q\8\t\9\5\r\1\n\g\n\e\k\x\5\k\x\0\c\y\p\h\h\h\m\2\w\a\z\k\i\8\q\x\6\a\k\m\h\h\0\7\5\1\s\1\s\m\2\p\l\m\i\j\8\a\9\v\3\b\d\2\f\4\9\j\3\7\f\5\m\3\1\7\z\q\k\l\e\i\7\a\5\i\l\d\f\q\f\9\e\w\x\9\b\k\x\2\e\e\a\9\i\4\8\i\v\w\3\b\k\1\t\e\a\2\2\f\u\r\6\c\m\q\a\k\5\s\u\8\r\k\y\9\d\f\4\t\j\o\m\e\n\r\r\0\n\n\3\w\w\9\e\l\v\h\n\a\v\w\s\n\i\3\t\6\e\q\1\u\f\n ]] 00:06:05.212 20:29:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:06:05.213 20:29:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ hxszppdj9ggk80giymeqh0iax5s5nzagj62nrgcepihldvprgb3zu95gvyzhubuwaj9fynk417lslt46igxm8237kunpadbwm4uvm8o8z6dt0gd2xozs8tq3i6goiktvexs23xyjn2mt8tpx2gmdguet0zjgmskukb2bnjmuyvbe0x3f0k4hi3dm88do7zymue14z53kls9ci8oxoqunxst7tgngmcbqhm76k4qkb4vn0wxr7e7duz2szdhlhusxeed3a5nharheznvdzyw61358aozkbo8ynzvzi8er3jo50xnttollh99iezslbaknutun8bqvta5h9hmmj6a7vwmap36xgc87zja3dv5oo4ainghv6vvs0qhk5kjfhnk27vo92tp7m3rz56ikktr5jz3qpjgxpprfvcej7xurc9oialdjtg3lt5i17wea28o0apf1h38dpp6uo2x8j362mb5ap16k9rs4zhl0c1eozuezt3zugqypmm9ptq3vdj8kefsllgfri8vyzhkuhuevt8pbo9i8rh21nk35x5hrm1fofsyhnli8wb4r5ofg0h1x2pr42a81ek0rjd8vamxza79uvn8aeadb6r6diac4jklbrmodkig68z58ayll2vwsdnucq0r4k21su7dvy7t3dqlud3e3j3b2gy0bu1upjeljf3lio82zu53jpgfazytlfsjiqyaoqjpohpkhtrdg8wfkxbsfrjcwvy5gdn6wdykiktb11w3u7cflkvwa04194vcmdib9u9jwcp2lpwr3j352rpgafn5a01inm0j4sagcmurnrbrtm89i16ahwx30qg3hn5b376seb4mzf6eb9q8t95r1ngnekx5kx0cyphhhm2wazki8qx6akmhh0751s1sm2plmij8a9v3bd2f49j37f5m317zqklei7a5ildfqf9ewx9bkx2eea9i48ivw3bk1tea22fur6cmqak5su8rky9df4tjomenrr0nn3ww9elvhnavwsni3t6eq1ufn == 
\h\x\s\z\p\p\d\j\9\g\g\k\8\0\g\i\y\m\e\q\h\0\i\a\x\5\s\5\n\z\a\g\j\6\2\n\r\g\c\e\p\i\h\l\d\v\p\r\g\b\3\z\u\9\5\g\v\y\z\h\u\b\u\w\a\j\9\f\y\n\k\4\1\7\l\s\l\t\4\6\i\g\x\m\8\2\3\7\k\u\n\p\a\d\b\w\m\4\u\v\m\8\o\8\z\6\d\t\0\g\d\2\x\o\z\s\8\t\q\3\i\6\g\o\i\k\t\v\e\x\s\2\3\x\y\j\n\2\m\t\8\t\p\x\2\g\m\d\g\u\e\t\0\z\j\g\m\s\k\u\k\b\2\b\n\j\m\u\y\v\b\e\0\x\3\f\0\k\4\h\i\3\d\m\8\8\d\o\7\z\y\m\u\e\1\4\z\5\3\k\l\s\9\c\i\8\o\x\o\q\u\n\x\s\t\7\t\g\n\g\m\c\b\q\h\m\7\6\k\4\q\k\b\4\v\n\0\w\x\r\7\e\7\d\u\z\2\s\z\d\h\l\h\u\s\x\e\e\d\3\a\5\n\h\a\r\h\e\z\n\v\d\z\y\w\6\1\3\5\8\a\o\z\k\b\o\8\y\n\z\v\z\i\8\e\r\3\j\o\5\0\x\n\t\t\o\l\l\h\9\9\i\e\z\s\l\b\a\k\n\u\t\u\n\8\b\q\v\t\a\5\h\9\h\m\m\j\6\a\7\v\w\m\a\p\3\6\x\g\c\8\7\z\j\a\3\d\v\5\o\o\4\a\i\n\g\h\v\6\v\v\s\0\q\h\k\5\k\j\f\h\n\k\2\7\v\o\9\2\t\p\7\m\3\r\z\5\6\i\k\k\t\r\5\j\z\3\q\p\j\g\x\p\p\r\f\v\c\e\j\7\x\u\r\c\9\o\i\a\l\d\j\t\g\3\l\t\5\i\1\7\w\e\a\2\8\o\0\a\p\f\1\h\3\8\d\p\p\6\u\o\2\x\8\j\3\6\2\m\b\5\a\p\1\6\k\9\r\s\4\z\h\l\0\c\1\e\o\z\u\e\z\t\3\z\u\g\q\y\p\m\m\9\p\t\q\3\v\d\j\8\k\e\f\s\l\l\g\f\r\i\8\v\y\z\h\k\u\h\u\e\v\t\8\p\b\o\9\i\8\r\h\2\1\n\k\3\5\x\5\h\r\m\1\f\o\f\s\y\h\n\l\i\8\w\b\4\r\5\o\f\g\0\h\1\x\2\p\r\4\2\a\8\1\e\k\0\r\j\d\8\v\a\m\x\z\a\7\9\u\v\n\8\a\e\a\d\b\6\r\6\d\i\a\c\4\j\k\l\b\r\m\o\d\k\i\g\6\8\z\5\8\a\y\l\l\2\v\w\s\d\n\u\c\q\0\r\4\k\2\1\s\u\7\d\v\y\7\t\3\d\q\l\u\d\3\e\3\j\3\b\2\g\y\0\b\u\1\u\p\j\e\l\j\f\3\l\i\o\8\2\z\u\5\3\j\p\g\f\a\z\y\t\l\f\s\j\i\q\y\a\o\q\j\p\o\h\p\k\h\t\r\d\g\8\w\f\k\x\b\s\f\r\j\c\w\v\y\5\g\d\n\6\w\d\y\k\i\k\t\b\1\1\w\3\u\7\c\f\l\k\v\w\a\0\4\1\9\4\v\c\m\d\i\b\9\u\9\j\w\c\p\2\l\p\w\r\3\j\3\5\2\r\p\g\a\f\n\5\a\0\1\i\n\m\0\j\4\s\a\g\c\m\u\r\n\r\b\r\t\m\8\9\i\1\6\a\h\w\x\3\0\q\g\3\h\n\5\b\3\7\6\s\e\b\4\m\z\f\6\e\b\9\q\8\t\9\5\r\1\n\g\n\e\k\x\5\k\x\0\c\y\p\h\h\h\m\2\w\a\z\k\i\8\q\x\6\a\k\m\h\h\0\7\5\1\s\1\s\m\2\p\l\m\i\j\8\a\9\v\3\b\d\2\f\4\9\j\3\7\f\5\m\3\1\7\z\q\k\l\e\i\7\a\5\i\l\d\f\q\f\9\e\w\x\9\b\k\x\2\e\e\a\9\i\4\8\i\v\w\3\b\k\1\t\e\a\2\2\f\u\r\6\c\m\q\a\k\5\s\u\8\r\k\y\9\d\f\4\t\j\o\m\e\n\r\r\0\n\n\3\w\w\9\e\l\v\h\n\a\v\w\s\n\i\3\t\6\e\q\1\u\f\n ]] 00:06:05.213 20:29:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:05.803 20:29:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:06:05.803 20:29:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:06:05.803 20:29:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:05.803 20:29:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:05.803 [2024-11-26 20:29:20.089748] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:06:05.803 [2024-11-26 20:29:20.089870] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60653 ] 00:06:05.803 { 00:06:05.803 "subsystems": [ 00:06:05.803 { 00:06:05.803 "subsystem": "bdev", 00:06:05.803 "config": [ 00:06:05.803 { 00:06:05.803 "params": { 00:06:05.803 "block_size": 512, 00:06:05.804 "num_blocks": 1048576, 00:06:05.804 "name": "malloc0" 00:06:05.804 }, 00:06:05.804 "method": "bdev_malloc_create" 00:06:05.804 }, 00:06:05.804 { 00:06:05.804 "params": { 00:06:05.804 "filename": "/dev/zram1", 00:06:05.804 "name": "uring0" 00:06:05.804 }, 00:06:05.804 "method": "bdev_uring_create" 00:06:05.804 }, 00:06:05.804 { 00:06:05.804 "method": "bdev_wait_for_examine" 00:06:05.804 } 00:06:05.804 ] 00:06:05.804 } 00:06:05.804 ] 00:06:05.804 } 00:06:05.804 [2024-11-26 20:29:20.234433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.804 [2024-11-26 20:29:20.299162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.069 [2024-11-26 20:29:20.362427] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.453  [2024-11-26T20:29:22.953Z] Copying: 175/512 [MB] (175 MBps) [2024-11-26T20:29:23.525Z] Copying: 356/512 [MB] (180 MBps) [2024-11-26T20:29:23.786Z] Copying: 512/512 [MB] (average 179 MBps) 00:06:09.231 00:06:09.231 20:29:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:06:09.231 20:29:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:06:09.231 20:29:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:09.232 20:29:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:06:09.232 20:29:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:06:09.232 20:29:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:09.232 20:29:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:09.232 20:29:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:09.232 [2024-11-26 20:29:23.707729] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:06:09.232 [2024-11-26 20:29:23.707800] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60704 ] 00:06:09.232 { 00:06:09.232 "subsystems": [ 00:06:09.232 { 00:06:09.232 "subsystem": "bdev", 00:06:09.232 "config": [ 00:06:09.232 { 00:06:09.232 "params": { 00:06:09.232 "block_size": 512, 00:06:09.232 "num_blocks": 1048576, 00:06:09.232 "name": "malloc0" 00:06:09.232 }, 00:06:09.232 "method": "bdev_malloc_create" 00:06:09.232 }, 00:06:09.232 { 00:06:09.232 "params": { 00:06:09.232 "filename": "/dev/zram1", 00:06:09.232 "name": "uring0" 00:06:09.232 }, 00:06:09.232 "method": "bdev_uring_create" 00:06:09.232 }, 00:06:09.232 { 00:06:09.232 "params": { 00:06:09.232 "name": "uring0" 00:06:09.232 }, 00:06:09.232 "method": "bdev_uring_delete" 00:06:09.232 }, 00:06:09.232 { 00:06:09.232 "method": "bdev_wait_for_examine" 00:06:09.232 } 00:06:09.232 ] 00:06:09.232 } 00:06:09.232 ] 00:06:09.232 } 00:06:09.494 [2024-11-26 20:29:23.849710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.494 [2024-11-26 20:29:23.913187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.494 [2024-11-26 20:29:23.960953] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.755  [2024-11-26T20:29:24.572Z] Copying: 0/0 [B] (average 0 Bps) 00:06:10.017 00:06:10.017 20:29:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:10.017 20:29:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:06:10.017 20:29:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:10.017 20:29:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.017 20:29:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:06:10.017 20:29:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:06:10.017 20:29:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:10.017 20:29:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:10.017 20:29:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.017 20:29:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.017 20:29:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.017 20:29:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.017 20:29:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.017 20:29:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.017 20:29:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:10.017 20:29:24 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:10.017 [2024-11-26 20:29:24.374789] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:06:10.017 [2024-11-26 20:29:24.374849] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60727 ] 00:06:10.017 { 00:06:10.017 "subsystems": [ 00:06:10.017 { 00:06:10.017 "subsystem": "bdev", 00:06:10.017 "config": [ 00:06:10.017 { 00:06:10.017 "params": { 00:06:10.017 "block_size": 512, 00:06:10.017 "num_blocks": 1048576, 00:06:10.017 "name": "malloc0" 00:06:10.017 }, 00:06:10.017 "method": "bdev_malloc_create" 00:06:10.017 }, 00:06:10.017 { 00:06:10.017 "params": { 00:06:10.017 "filename": "/dev/zram1", 00:06:10.017 "name": "uring0" 00:06:10.017 }, 00:06:10.017 "method": "bdev_uring_create" 00:06:10.017 }, 00:06:10.017 { 00:06:10.017 "params": { 00:06:10.017 "name": "uring0" 00:06:10.017 }, 00:06:10.017 "method": "bdev_uring_delete" 00:06:10.017 }, 00:06:10.017 { 00:06:10.017 "method": "bdev_wait_for_examine" 00:06:10.017 } 00:06:10.017 ] 00:06:10.017 } 00:06:10.017 ] 00:06:10.017 } 00:06:10.017 [2024-11-26 20:29:24.515047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.017 [2024-11-26 20:29:24.553960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.279 [2024-11-26 20:29:24.588856] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.279 [2024-11-26 20:29:24.754522] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:06:10.279 [2024-11-26 20:29:24.754816] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:06:10.279 [2024-11-26 20:29:24.754856] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:06:10.279 [2024-11-26 20:29:24.755128] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:10.538 [2024-11-26 20:29:24.945597] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:10.538 20:29:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:06:10.538 20:29:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:10.538 20:29:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:06:10.538 20:29:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:06:10.538 20:29:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:06:10.538 20:29:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:10.538 20:29:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:06:10.538 20:29:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:06:10.538 20:29:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:06:10.538 20:29:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:06:10.538 20:29:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:06:10.538 20:29:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:10.797 00:06:10.797 ************************************ 00:06:10.797 END TEST dd_uring_copy 00:06:10.797 ************************************ 00:06:10.797 real 0m13.526s 00:06:10.797 user 0m9.075s 00:06:10.797 sys 0m11.642s 00:06:10.797 20:29:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.797 20:29:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:10.797 00:06:10.797 real 0m13.744s 00:06:10.797 user 0m9.177s 00:06:10.797 sys 0m11.753s 00:06:10.797 20:29:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.797 ************************************ 00:06:10.797 END TEST spdk_dd_uring 00:06:10.797 ************************************ 00:06:10.797 20:29:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:10.797 20:29:25 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:10.797 20:29:25 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.797 20:29:25 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.797 20:29:25 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:10.797 ************************************ 00:06:10.797 START TEST spdk_dd_sparse 00:06:10.797 ************************************ 00:06:10.797 20:29:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:10.797 * Looking for test storage... 00:06:11.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:11.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.056 --rc genhtml_branch_coverage=1 00:06:11.056 --rc genhtml_function_coverage=1 00:06:11.056 --rc genhtml_legend=1 00:06:11.056 --rc geninfo_all_blocks=1 00:06:11.056 --rc geninfo_unexecuted_blocks=1 00:06:11.056 00:06:11.056 ' 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:11.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.056 --rc genhtml_branch_coverage=1 00:06:11.056 --rc genhtml_function_coverage=1 00:06:11.056 --rc genhtml_legend=1 00:06:11.056 --rc geninfo_all_blocks=1 00:06:11.056 --rc geninfo_unexecuted_blocks=1 00:06:11.056 00:06:11.056 ' 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:11.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.056 --rc genhtml_branch_coverage=1 00:06:11.056 --rc genhtml_function_coverage=1 00:06:11.056 --rc genhtml_legend=1 00:06:11.056 --rc geninfo_all_blocks=1 00:06:11.056 --rc geninfo_unexecuted_blocks=1 00:06:11.056 00:06:11.056 ' 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:11.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.056 --rc genhtml_branch_coverage=1 00:06:11.056 --rc genhtml_function_coverage=1 00:06:11.056 --rc genhtml_legend=1 00:06:11.056 --rc geninfo_all_blocks=1 00:06:11.056 --rc geninfo_unexecuted_blocks=1 00:06:11.056 00:06:11.056 ' 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.056 20:29:25 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:06:11.056 1+0 records in 00:06:11.056 1+0 records out 00:06:11.056 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00708184 s, 592 MB/s 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:06:11.056 1+0 records in 00:06:11.056 1+0 records out 00:06:11.056 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00621014 s, 675 MB/s 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:06:11.056 1+0 records in 00:06:11.056 1+0 records out 00:06:11.056 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00783963 s, 535 MB/s 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:11.056 ************************************ 00:06:11.056 START TEST dd_sparse_file_to_file 00:06:11.056 ************************************ 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:06:11.056 20:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:06:11.057 20:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:06:11.057 20:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:11.057 20:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:11.057 [2024-11-26 20:29:25.522798] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
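The prepare step traced above builds the sparse input by hand: truncate creates a 100 MiB backing file for the dd_aio bdev, and the three dd runs drop one 4 MiB block of zeroes at offsets 0, 16 MiB (seek=4 with bs=4M) and 32 MiB (seek=8), leaving holes in between. file_zero1 therefore has a 36 MiB apparent size while only about 12 MiB is allocated, which is what the stat checks later in this test compare between the source and destination files. A minimal sketch of the same layout outside the harness (file names and sizes taken from the log; the stat flags are standard GNU coreutils):

  truncate dd_sparse_aio_disk --size 104857600           # 100 MiB backing file for the dd_aio bdev
  dd if=/dev/zero of=file_zero1 bs=4M count=1             # 4 MiB of data at offset 0
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4      # 4 MiB at 16 MiB, hole before it
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8      # 4 MiB at 32 MiB
  stat --printf='%s bytes apparent, %b blocks allocated\n' file_zero1   # 37748736 vs 24576 x 512 B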
00:06:11.057 [2024-11-26 20:29:25.523016] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60826 ] 00:06:11.057 { 00:06:11.057 "subsystems": [ 00:06:11.057 { 00:06:11.057 "subsystem": "bdev", 00:06:11.057 "config": [ 00:06:11.057 { 00:06:11.057 "params": { 00:06:11.057 "block_size": 4096, 00:06:11.057 "filename": "dd_sparse_aio_disk", 00:06:11.057 "name": "dd_aio" 00:06:11.057 }, 00:06:11.057 "method": "bdev_aio_create" 00:06:11.057 }, 00:06:11.057 { 00:06:11.057 "params": { 00:06:11.057 "lvs_name": "dd_lvstore", 00:06:11.057 "bdev_name": "dd_aio" 00:06:11.057 }, 00:06:11.057 "method": "bdev_lvol_create_lvstore" 00:06:11.057 }, 00:06:11.057 { 00:06:11.057 "method": "bdev_wait_for_examine" 00:06:11.057 } 00:06:11.057 ] 00:06:11.057 } 00:06:11.057 ] 00:06:11.057 } 00:06:11.314 [2024-11-26 20:29:25.666298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.314 [2024-11-26 20:29:25.706939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.314 [2024-11-26 20:29:25.742173] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.314  [2024-11-26T20:29:26.128Z] Copying: 12/36 [MB] (average 1500 MBps) 00:06:11.573 00:06:11.573 20:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:06:11.573 20:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:06:11.573 20:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:06:11.573 20:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:06:11.573 20:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:11.573 20:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:06:11.573 20:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:06:11.573 20:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:06:11.573 ************************************ 00:06:11.573 END TEST dd_sparse_file_to_file 00:06:11.573 ************************************ 00:06:11.573 20:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:06:11.573 20:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:11.573 00:06:11.573 real 0m0.501s 00:06:11.573 user 0m0.291s 00:06:11.573 sys 0m0.244s 00:06:11.573 20:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.573 20:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:11.573 20:29:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:06:11.573 20:29:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.573 20:29:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.573 20:29:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:11.573 ************************************ 00:06:11.573 START TEST dd_sparse_file_to_bdev 
00:06:11.573 ************************************ 00:06:11.573 20:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:06:11.573 20:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:11.573 20:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:06:11.573 20:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:06:11.573 20:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:06:11.573 20:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:06:11.573 20:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:06:11.573 20:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:11.573 20:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:11.573 [2024-11-26 20:29:26.071637] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:06:11.573 [2024-11-26 20:29:26.071759] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60874 ] 00:06:11.573 { 00:06:11.573 "subsystems": [ 00:06:11.573 { 00:06:11.573 "subsystem": "bdev", 00:06:11.573 "config": [ 00:06:11.573 { 00:06:11.573 "params": { 00:06:11.573 "block_size": 4096, 00:06:11.573 "filename": "dd_sparse_aio_disk", 00:06:11.573 "name": "dd_aio" 00:06:11.573 }, 00:06:11.573 "method": "bdev_aio_create" 00:06:11.573 }, 00:06:11.573 { 00:06:11.573 "params": { 00:06:11.573 "lvs_name": "dd_lvstore", 00:06:11.573 "lvol_name": "dd_lvol", 00:06:11.573 "size_in_mib": 36, 00:06:11.573 "thin_provision": true 00:06:11.573 }, 00:06:11.573 "method": "bdev_lvol_create" 00:06:11.573 }, 00:06:11.573 { 00:06:11.573 "method": "bdev_wait_for_examine" 00:06:11.573 } 00:06:11.573 ] 00:06:11.573 } 00:06:11.573 ] 00:06:11.573 } 00:06:11.831 [2024-11-26 20:29:26.210403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.831 [2024-11-26 20:29:26.250279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.831 [2024-11-26 20:29:26.284871] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.831  [2024-11-26T20:29:26.646Z] Copying: 12/36 [MB] (average 444 MBps) 00:06:12.091 00:06:12.091 00:06:12.091 real 0m0.472s 00:06:12.091 user 0m0.284s 00:06:12.091 sys 0m0.231s 00:06:12.091 20:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.091 20:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:12.091 ************************************ 00:06:12.091 END TEST dd_sparse_file_to_bdev 00:06:12.091 ************************************ 00:06:12.091 20:29:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:06:12.091 20:29:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.091 20:29:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.091 20:29:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:12.091 ************************************ 00:06:12.091 START TEST dd_sparse_bdev_to_file 00:06:12.091 ************************************ 00:06:12.091 20:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:06:12.091 20:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:06:12.091 20:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:06:12.091 20:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:12.091 20:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:06:12.091 20:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:06:12.091 20:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:06:12.091 20:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:12.091 20:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:12.091 { 00:06:12.091 "subsystems": [ 00:06:12.091 { 00:06:12.091 "subsystem": "bdev", 00:06:12.091 "config": [ 00:06:12.091 { 00:06:12.091 "params": { 00:06:12.091 "block_size": 4096, 00:06:12.091 "filename": "dd_sparse_aio_disk", 00:06:12.091 "name": "dd_aio" 00:06:12.091 }, 00:06:12.091 "method": "bdev_aio_create" 00:06:12.091 }, 00:06:12.092 { 00:06:12.092 "method": "bdev_wait_for_examine" 00:06:12.092 } 00:06:12.092 ] 00:06:12.092 } 00:06:12.092 ] 00:06:12.092 } 00:06:12.092 [2024-11-26 20:29:26.610342] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
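For this bdev-to-file case gen_conf writes the JSON shown above to an anonymous descriptor and spdk_dd reads it back through --json /dev/fd/62; only bdev_aio_create is configured because examining dd_aio rediscovers the dd_lvstore/dd_lvol volume written by the previous test, which is also why bdev_wait_for_examine is included. A hedged sketch of running the same copy by hand, with bash process substitution standing in for the descriptor (binary path, bdev names and block size copied from the log; assumes dd_sparse_aio_disk still holds the lvstore):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse \
      --json <(echo '{
        "subsystems": [ { "subsystem": "bdev", "config": [
          { "method": "bdev_aio_create",
            "params": { "filename": "dd_sparse_aio_disk", "name": "dd_aio", "block_size": 4096 } },
          { "method": "bdev_wait_for_examine" }
        ] } ]
      }')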
00:06:12.092 [2024-11-26 20:29:26.610553] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60907 ] 00:06:12.351 [2024-11-26 20:29:26.754892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.351 [2024-11-26 20:29:26.809431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.351 [2024-11-26 20:29:26.871362] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.659  [2024-11-26T20:29:27.214Z] Copying: 12/36 [MB] (average 1000 MBps) 00:06:12.659 00:06:12.659 20:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:06:12.659 20:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:06:12.659 20:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:06:12.921 20:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:06:12.921 20:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:12.921 20:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:06:12.921 20:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:06:12.921 20:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:06:12.922 ************************************ 00:06:12.922 END TEST dd_sparse_bdev_to_file 00:06:12.922 ************************************ 00:06:12.922 20:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:06:12.922 20:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:12.922 00:06:12.922 real 0m0.638s 00:06:12.922 user 0m0.351s 00:06:12.922 sys 0m0.372s 00:06:12.922 20:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.922 20:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:12.922 20:29:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:06:12.922 20:29:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:06:12.922 20:29:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:06:12.922 20:29:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:06:12.922 20:29:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:06:12.922 ************************************ 00:06:12.922 END TEST spdk_dd_sparse 00:06:12.922 ************************************ 00:06:12.922 00:06:12.922 real 0m2.002s 00:06:12.922 user 0m1.075s 00:06:12.922 sys 0m1.036s 00:06:12.922 20:29:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.922 20:29:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:12.922 20:29:27 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:12.922 20:29:27 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.922 20:29:27 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.922 20:29:27 spdk_dd -- 
common/autotest_common.sh@10 -- # set +x 00:06:12.922 ************************************ 00:06:12.922 START TEST spdk_dd_negative 00:06:12.922 ************************************ 00:06:12.922 20:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:12.922 * Looking for test storage... 00:06:12.922 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:12.922 20:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.922 20:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.922 20:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:13.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.184 --rc genhtml_branch_coverage=1 00:06:13.184 --rc genhtml_function_coverage=1 00:06:13.184 --rc genhtml_legend=1 00:06:13.184 --rc geninfo_all_blocks=1 00:06:13.184 --rc geninfo_unexecuted_blocks=1 00:06:13.184 00:06:13.184 ' 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:13.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.184 --rc genhtml_branch_coverage=1 00:06:13.184 --rc genhtml_function_coverage=1 00:06:13.184 --rc genhtml_legend=1 00:06:13.184 --rc geninfo_all_blocks=1 00:06:13.184 --rc geninfo_unexecuted_blocks=1 00:06:13.184 00:06:13.184 ' 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:13.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.184 --rc genhtml_branch_coverage=1 00:06:13.184 --rc genhtml_function_coverage=1 00:06:13.184 --rc genhtml_legend=1 00:06:13.184 --rc geninfo_all_blocks=1 00:06:13.184 --rc geninfo_unexecuted_blocks=1 00:06:13.184 00:06:13.184 ' 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:13.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.184 --rc genhtml_branch_coverage=1 00:06:13.184 --rc genhtml_function_coverage=1 00:06:13.184 --rc genhtml_legend=1 00:06:13.184 --rc geninfo_all_blocks=1 00:06:13.184 --rc geninfo_unexecuted_blocks=1 00:06:13.184 00:06:13.184 ' 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:13.184 20:29:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:13.185 20:29:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:13.185 20:29:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:13.185 20:29:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:06:13.185 20:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.185 20:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.185 20:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:13.185 ************************************ 00:06:13.185 START TEST 
dd_invalid_arguments 00:06:13.185 ************************************ 00:06:13.185 20:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:06:13.185 20:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:13.185 20:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:06:13.185 20:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:13.185 20:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.185 20:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.185 20:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.185 20:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.185 20:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.185 20:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.185 20:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.185 20:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:13.185 20:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:13.185 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:06:13.185 00:06:13.185 CPU options: 00:06:13.185 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:06:13.185 (like [0,1,10]) 00:06:13.185 --lcores lcore to CPU mapping list. The list is in the format: 00:06:13.185 [<,lcores[@CPUs]>...] 00:06:13.185 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:13.185 Within the group, '-' is used for range separator, 00:06:13.185 ',' is used for single number separator. 00:06:13.185 '( )' can be omitted for single element group, 00:06:13.185 '@' can be omitted if cpus and lcores have the same value 00:06:13.185 --disable-cpumask-locks Disable CPU core lock files. 00:06:13.185 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:06:13.185 pollers in the app support interrupt mode) 00:06:13.185 -p, --main-core main (primary) core for DPDK 00:06:13.185 00:06:13.185 Configuration options: 00:06:13.185 -c, --config, --json JSON config file 00:06:13.185 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:13.185 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:06:13.185 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:13.185 --rpcs-allowed comma-separated list of permitted RPCS 00:06:13.185 --json-ignore-init-errors don't exit on invalid config entry 00:06:13.185 00:06:13.185 Memory options: 00:06:13.185 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:13.185 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:13.185 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:13.185 -R, --huge-unlink unlink huge files after initialization 00:06:13.185 -n, --mem-channels number of memory channels used for DPDK 00:06:13.185 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:13.185 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:13.185 --no-huge run without using hugepages 00:06:13.185 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:06:13.185 -i, --shm-id shared memory ID (optional) 00:06:13.185 -g, --single-file-segments force creating just one hugetlbfs file 00:06:13.185 00:06:13.185 PCI options: 00:06:13.185 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:13.185 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:13.185 -u, --no-pci disable PCI access 00:06:13.185 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:13.185 00:06:13.185 Log options: 00:06:13.185 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:06:13.185 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:06:13.185 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:06:13.185 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:06:13.185 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:06:13.185 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:06:13.185 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:06:13.185 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:06:13.185 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:06:13.185 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:06:13.185 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:06:13.185 --silence-noticelog disable notice level logging to stderr 00:06:13.185 00:06:13.185 Trace options: 00:06:13.185 --num-trace-entries number of trace entries for each core, must be power of 2, 00:06:13.185 setting 0 to disable trace (default 32768) 00:06:13.185 Tracepoints vary in size and can use more than one trace entry. 00:06:13.185 -e, --tpoint-group [:] 00:06:13.185 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:06:13.185 [2024-11-26 20:29:27.584800] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:06:13.185 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:06:13.185 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:06:13.185 bdev_raid, scheduler, all). 00:06:13.185 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:06:13.185 a tracepoint group. First tpoint inside a group can be enabled by 00:06:13.185 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:06:13.185 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:06:13.185 in /include/spdk_internal/trace_defs.h 00:06:13.185 00:06:13.185 Other options: 00:06:13.185 -h, --help show this usage 00:06:13.185 -v, --version print SPDK version 00:06:13.185 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:13.185 --env-context Opaque context for use of the env implementation 00:06:13.185 00:06:13.185 Application specific: 00:06:13.185 [--------- DD Options ---------] 00:06:13.185 --if Input file. Must specify either --if or --ib. 00:06:13.185 --ib Input bdev. Must specifier either --if or --ib 00:06:13.185 --of Output file. Must specify either --of or --ob. 00:06:13.185 --ob Output bdev. Must specify either --of or --ob. 00:06:13.185 --iflag Input file flags. 00:06:13.185 --oflag Output file flags. 00:06:13.185 --bs I/O unit size (default: 4096) 00:06:13.185 --qd Queue depth (default: 2) 00:06:13.185 --count I/O unit count. The number of I/O units to copy. (default: all) 00:06:13.186 --skip Skip this many I/O units at start of input. (default: 0) 00:06:13.186 --seek Skip this many I/O units at start of output. (default: 0) 00:06:13.186 --aio Force usage of AIO. (by default io_uring is used if available) 00:06:13.186 --sparse Enable hole skipping in input target 00:06:13.186 Available iflag and oflag values: 00:06:13.186 append - append mode 00:06:13.186 direct - use direct I/O for data 00:06:13.186 directory - fail unless a directory 00:06:13.186 dsync - use synchronized I/O for data 00:06:13.186 noatime - do not update access time 00:06:13.186 noctty - do not assign controlling terminal from file 00:06:13.186 nofollow - do not follow symlinks 00:06:13.186 nonblock - use non-blocking I/O 00:06:13.186 sync - use synchronized I/O for data and metadata 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:13.186 00:06:13.186 real 0m0.055s 00:06:13.186 user 0m0.033s 00:06:13.186 sys 0m0.019s 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:06:13.186 ************************************ 00:06:13.186 END TEST dd_invalid_arguments 00:06:13.186 ************************************ 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:13.186 ************************************ 00:06:13.186 START TEST dd_double_input 00:06:13.186 ************************************ 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:13.186 [2024-11-26 20:29:27.691745] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
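The dd_double_input case feeds spdk_dd both an input file and an input bdev on purpose; the spdk_dd.c:1487 error above is the expected outcome, and the NOT wrapper turns that failure into a pass. A rough equivalent of the same check without the wrapper (binary path and dump file taken from the log):

  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
       --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=; then
    echo 'FAIL: spdk_dd accepted both --if and --ib' >&2
  else
    echo 'OK: conflicting input options were rejected'
  fi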
00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:13.186 00:06:13.186 real 0m0.047s 00:06:13.186 user 0m0.022s 00:06:13.186 sys 0m0.023s 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.186 20:29:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:06:13.186 ************************************ 00:06:13.186 END TEST dd_double_input 00:06:13.186 ************************************ 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:13.447 ************************************ 00:06:13.447 START TEST dd_double_output 00:06:13.447 ************************************ 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:13.447 [2024-11-26 20:29:27.800783] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:06:13.447 ************************************ 00:06:13.447 END TEST dd_double_output 00:06:13.447 ************************************ 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:13.447 00:06:13.447 real 0m0.065s 00:06:13.447 user 0m0.037s 00:06:13.447 sys 0m0.026s 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:13.447 ************************************ 00:06:13.447 START TEST dd_no_input 00:06:13.447 ************************************ 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:13.447 [2024-11-26 20:29:27.908180] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:13.447 00:06:13.447 real 0m0.047s 00:06:13.447 user 0m0.028s 00:06:13.447 sys 0m0.019s 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.447 ************************************ 00:06:13.447 END TEST dd_no_input 00:06:13.447 ************************************ 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.447 20:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:13.447 ************************************ 00:06:13.447 START TEST dd_no_output 00:06:13.448 ************************************ 00:06:13.448 20:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:06:13.448 20:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:13.448 20:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:06:13.448 20:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:13.448 20:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.448 20:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.448 20:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.448 20:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.448 20:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.448 20:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.448 20:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.448 20:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:13.448 20:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:13.708 [2024-11-26 20:29:28.005191] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:06:13.708 20:29:28 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:13.708 00:06:13.708 real 0m0.052s 00:06:13.708 user 0m0.036s 00:06:13.708 sys 0m0.014s 00:06:13.708 ************************************ 00:06:13.708 END TEST dd_no_output 00:06:13.708 ************************************ 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:13.708 ************************************ 00:06:13.708 START TEST dd_wrong_blocksize 00:06:13.708 ************************************ 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:13.708 [2024-11-26 20:29:28.101158] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:13.708 00:06:13.708 real 0m0.051s 00:06:13.708 user 0m0.032s 00:06:13.708 sys 0m0.018s 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.708 ************************************ 00:06:13.708 END TEST dd_wrong_blocksize 00:06:13.708 ************************************ 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:13.708 ************************************ 00:06:13.708 START TEST dd_smaller_blocksize 00:06:13.708 ************************************ 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.708 20:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.709 20:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.709 20:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.709 20:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.709 20:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.709 
20:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:13.709 20:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:13.709 [2024-11-26 20:29:28.199719] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:06:13.709 [2024-11-26 20:29:28.199914] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61133 ] 00:06:13.968 [2024-11-26 20:29:28.339756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.968 [2024-11-26 20:29:28.380944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.968 [2024-11-26 20:29:28.416016] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.225 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:14.483 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:14.484 [2024-11-26 20:29:28.860330] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:06:14.484 [2024-11-26 20:29:28.860402] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:14.484 [2024-11-26 20:29:28.931126] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:14.484 20:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:06:14.484 20:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.484 20:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:06:14.484 20:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:06:14.484 20:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:06:14.484 20:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.484 00:06:14.484 real 0m0.820s 00:06:14.484 user 0m0.252s 00:06:14.484 sys 0m0.461s 00:06:14.484 20:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.484 20:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:14.484 ************************************ 00:06:14.484 END TEST dd_smaller_blocksize 00:06:14.484 ************************************ 00:06:14.484 20:29:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:06:14.484 20:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.484 20:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.484 20:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:14.484 ************************************ 00:06:14.484 START TEST dd_invalid_count 00:06:14.484 ************************************ 00:06:14.484 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
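In the dd_smaller_blocksize case the copy has to fail because --bs=99999999999999 asks for a single I/O buffer of roughly 91 TiB: EAL cannot find a suitable memseg, spdk_dd reports "Cannot allocate memory - try smaller block size value", and the run exits with status 244. The trace above then shows that status being folded before the negative test is declared a pass; a simplified paraphrase of that folding (not the verbatim autotest_common.sh helper):

  es=244                                  # raw exit status of the failed spdk_dd run
  (( es > 128 )) && es=$(( es & 127 ))    # 244 -> 116: statuses above 128 are treated as signal-style
  (( es != 0 )) && es=1                   # any ordinary failure collapses to 1
  (( ! es == 0 )) && echo 'negative test passed: the oversized --bs was rejected'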
00:06:14.484 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:14.484 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:06:14.484 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:14.484 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.484 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.484 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.484 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.484 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.484 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.484 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.484 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:14.484 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:14.745 [2024-11-26 20:29:29.068855] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.745 00:06:14.745 real 0m0.061s 00:06:14.745 user 0m0.037s 00:06:14.745 sys 0m0.020s 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.745 ************************************ 00:06:14.745 END TEST dd_invalid_count 00:06:14.745 ************************************ 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:14.745 ************************************ 
00:06:14.745 START TEST dd_invalid_oflag 00:06:14.745 ************************************ 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:14.745 [2024-11-26 20:29:29.184923] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.745 00:06:14.745 real 0m0.055s 00:06:14.745 user 0m0.029s 00:06:14.745 sys 0m0.025s 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.745 ************************************ 00:06:14.745 END TEST dd_invalid_oflag 00:06:14.745 ************************************ 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:14.745 ************************************ 00:06:14.745 START TEST dd_invalid_iflag 00:06:14.745 
************************************ 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:14.745 [2024-11-26 20:29:29.284079] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.745 00:06:14.745 real 0m0.052s 00:06:14.745 user 0m0.027s 00:06:14.745 sys 0m0.024s 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.745 ************************************ 00:06:14.745 END TEST dd_invalid_iflag 00:06:14.745 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:06:14.745 ************************************ 00:06:15.006 20:29:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:06:15.006 20:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.006 20:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.006 20:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:15.006 ************************************ 00:06:15.006 START TEST dd_unknown_flag 00:06:15.006 ************************************ 00:06:15.006 
20:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:06:15.006 20:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:15.006 20:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:06:15.006 20:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:15.006 20:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.006 20:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.006 20:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.006 20:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.006 20:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.006 20:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.006 20:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.006 20:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:15.006 20:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:15.006 [2024-11-26 20:29:29.384998] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:06:15.006 [2024-11-26 20:29:29.385074] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61225 ] 00:06:15.006 [2024-11-26 20:29:29.526506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.268 [2024-11-26 20:29:29.571636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.268 [2024-11-26 20:29:29.614122] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.268 [2024-11-26 20:29:29.646704] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:06:15.268 [2024-11-26 20:29:29.646751] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.268 [2024-11-26 20:29:29.646806] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:06:15.268 [2024-11-26 20:29:29.646815] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.268 [2024-11-26 20:29:29.647000] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:06:15.268 [2024-11-26 20:29:29.647012] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.268 [2024-11-26 20:29:29.647063] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:06:15.268 [2024-11-26 20:29:29.647069] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:06:15.268 [2024-11-26 20:29:29.731559] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:15.268 20:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:06:15.268 20:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:15.268 20:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:06:15.268 20:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:06:15.268 20:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:06:15.268 20:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:15.268 ************************************ 00:06:15.268 END TEST dd_unknown_flag 00:06:15.268 ************************************ 00:06:15.268 00:06:15.268 real 0m0.441s 00:06:15.268 user 0m0.225s 00:06:15.268 sys 0m0.125s 00:06:15.268 20:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.268 20:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:06:15.529 20:29:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:06:15.529 20:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.529 20:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.529 20:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:15.529 ************************************ 00:06:15.529 START TEST dd_invalid_json 00:06:15.529 ************************************ 00:06:15.529 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:06:15.529 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:15.529 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:06:15.529 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:15.529 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.529 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:06:15.529 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.529 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.529 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.529 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.529 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.529 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.530 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:15.530 20:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:15.530 [2024-11-26 20:29:29.874210] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
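The dd_invalid_json case above points --json at /dev/fd/62 and feeds it an empty document (the bare ':' builtin traced at negative_dd.sh@94). The exact fd-62 redirection is not visible in the trace; one way to reproduce the same situation with plain process substitution, as an assumption rather than the harness's wiring:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
# <(printf '') expands to a /dev/fd/NN path whose contents are empty, so the JSON
# config parser has nothing to read and app startup is expected to fail.
"$SPDK_DD" --if="$DUMP0" --of="$DUMP1" --json <(printf '') \
    || echo "empty JSON config rejected (exit $?)"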
00:06:15.530 [2024-11-26 20:29:29.874279] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61254 ] 00:06:15.530 [2024-11-26 20:29:30.012745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.530 [2024-11-26 20:29:30.057625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.530 [2024-11-26 20:29:30.057700] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:06:15.530 [2024-11-26 20:29:30.057716] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:15.530 [2024-11-26 20:29:30.057726] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.530 [2024-11-26 20:29:30.057777] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:15.791 00:06:15.791 real 0m0.275s 00:06:15.791 user 0m0.125s 00:06:15.791 sys 0m0.049s 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:06:15.791 ************************************ 00:06:15.791 END TEST dd_invalid_json 00:06:15.791 ************************************ 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:15.791 ************************************ 00:06:15.791 START TEST dd_invalid_seek 00:06:15.791 ************************************ 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:15.791 
20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:15.791 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:15.791 { 00:06:15.791 "subsystems": [ 00:06:15.791 { 00:06:15.791 "subsystem": "bdev", 00:06:15.791 "config": [ 00:06:15.791 { 00:06:15.791 "params": { 00:06:15.791 "block_size": 512, 00:06:15.791 "num_blocks": 512, 00:06:15.791 "name": "malloc0" 00:06:15.791 }, 00:06:15.791 "method": "bdev_malloc_create" 00:06:15.791 }, 00:06:15.791 { 00:06:15.791 "params": { 00:06:15.791 "block_size": 512, 00:06:15.791 "num_blocks": 512, 00:06:15.791 "name": "malloc1" 00:06:15.791 }, 00:06:15.791 "method": "bdev_malloc_create" 00:06:15.791 }, 00:06:15.791 { 00:06:15.791 "method": "bdev_wait_for_examine" 00:06:15.791 } 00:06:15.791 ] 00:06:15.791 } 00:06:15.791 ] 00:06:15.791 } 00:06:15.791 [2024-11-26 20:29:30.192142] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
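The JSON document printed above is the gen_conf payload for this case: two malloc bdevs of 512 blocks x 512 bytes plus bdev_wait_for_examine, handed to spdk_dd over /dev/fd/62. With malloc1 only 512 blocks long, --seek=513 asks spdk_dd to start writing past the end of the output bdev. A sketch of the same invocation driven from a regular config file (the /tmp path is illustrative, not what the harness uses):

cat > /tmp/dd_malloc.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 512, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "block_size": 512, "num_blocks": 512, "name": "malloc1" },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
# malloc1 exposes blocks 0..511, so a seek of 513 blocks cannot be satisfied.
"$SPDK_DD" --ib=malloc0 --ob=malloc1 --bs=512 --seek=513 --json /tmp/dd_malloc.json \
    || echo "seek beyond the output bdev rejected (exit $?)"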
00:06:15.791 [2024-11-26 20:29:30.192208] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61283 ] 00:06:15.791 [2024-11-26 20:29:30.330605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.052 [2024-11-26 20:29:30.373804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.052 [2024-11-26 20:29:30.417267] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.052 [2024-11-26 20:29:30.477874] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:06:16.052 [2024-11-26 20:29:30.477948] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:16.052 [2024-11-26 20:29:30.561529] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:16.314 00:06:16.314 real 0m0.459s 00:06:16.314 user 0m0.269s 00:06:16.314 sys 0m0.127s 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.314 ************************************ 00:06:16.314 END TEST dd_invalid_seek 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:06:16.314 ************************************ 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:16.314 ************************************ 00:06:16.314 START TEST dd_invalid_skip 00:06:16.314 ************************************ 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:16.314 20:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:16.314 { 00:06:16.314 "subsystems": [ 00:06:16.314 { 00:06:16.314 "subsystem": "bdev", 00:06:16.314 "config": [ 00:06:16.314 { 00:06:16.314 "params": { 00:06:16.314 "block_size": 512, 00:06:16.314 "num_blocks": 512, 00:06:16.314 "name": "malloc0" 00:06:16.314 }, 00:06:16.314 "method": "bdev_malloc_create" 00:06:16.314 }, 00:06:16.314 { 00:06:16.314 "params": { 00:06:16.314 "block_size": 512, 00:06:16.314 "num_blocks": 512, 00:06:16.314 "name": "malloc1" 00:06:16.314 }, 00:06:16.315 "method": "bdev_malloc_create" 00:06:16.315 }, 00:06:16.315 { 00:06:16.315 "method": "bdev_wait_for_examine" 00:06:16.315 } 00:06:16.315 ] 00:06:16.315 } 00:06:16.315 ] 00:06:16.315 } 00:06:16.315 [2024-11-26 20:29:30.696803] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:06:16.315 [2024-11-26 20:29:30.696859] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61317 ] 00:06:16.315 [2024-11-26 20:29:30.834829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.576 [2024-11-26 20:29:30.876995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.576 [2024-11-26 20:29:30.918512] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.576 [2024-11-26 20:29:30.977470] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:06:16.576 [2024-11-26 20:29:30.977528] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:16.576 [2024-11-26 20:29:31.061934] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:16.576 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:06:16.576 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:16.576 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:06:16.576 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:06:16.576 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:06:16.576 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:16.576 00:06:16.576 real 0m0.455s 00:06:16.576 user 0m0.274s 00:06:16.576 sys 0m0.119s 00:06:16.576 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.576 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:06:16.576 ************************************ 00:06:16.576 END TEST dd_invalid_skip 00:06:16.576 ************************************ 00:06:16.839 20:29:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:06:16.839 20:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.839 20:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.839 20:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:16.839 ************************************ 00:06:16.839 START TEST dd_invalid_input_count 00:06:16.839 ************************************ 00:06:16.839 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:06:16.839 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:16.839 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:16.839 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:06:16.839 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:16.839 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:16.839 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:06:16.839 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:16.839 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:06:16.839 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:16.839 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.839 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:06:16.839 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:06:16.839 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:06:16.839 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.839 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.839 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.839 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.839 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.839 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.839 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:16.839 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:16.839 { 00:06:16.839 "subsystems": [ 00:06:16.839 { 00:06:16.839 "subsystem": "bdev", 00:06:16.839 "config": [ 00:06:16.839 { 00:06:16.839 "params": { 00:06:16.839 "block_size": 512, 00:06:16.839 "num_blocks": 512, 00:06:16.839 "name": "malloc0" 00:06:16.839 }, 00:06:16.839 "method": "bdev_malloc_create" 00:06:16.839 }, 00:06:16.839 { 00:06:16.839 "params": { 00:06:16.839 "block_size": 512, 00:06:16.839 "num_blocks": 512, 00:06:16.839 "name": "malloc1" 00:06:16.839 }, 00:06:16.839 "method": "bdev_malloc_create" 00:06:16.839 }, 00:06:16.839 { 00:06:16.839 "method": "bdev_wait_for_examine" 00:06:16.839 } 00:06:16.839 ] 00:06:16.839 } 00:06:16.839 ] 00:06:16.839 } 00:06:16.839 [2024-11-26 20:29:31.195896] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
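Every case above is driven through run_test: the '[' 2 -le 1 ']' guard, the xtrace_disable calls, the asterisk banners and the real/user/sys block are its bookkeeping around each test function. A simplified sketch of what that wrapper does, under the assumption that only the visible behavior matters (the real helper in common/autotest_common.sh does more, e.g. xtrace management):

run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                      # run the test function, e.g. invalid_input_count
    local es=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$es"
}
# Usage mirroring the traces above: run_test_sketch dd_invalid_input_count invalid_input_count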
00:06:16.839 [2024-11-26 20:29:31.195980] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61350 ] 00:06:16.839 [2024-11-26 20:29:31.328153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.839 [2024-11-26 20:29:31.371537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.101 [2024-11-26 20:29:31.413803] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.101 [2024-11-26 20:29:31.473314] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:06:17.101 [2024-11-26 20:29:31.473374] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.101 [2024-11-26 20:29:31.557738] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:17.101 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:06:17.101 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:17.101 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:06:17.101 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:06:17.101 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:06:17.101 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:17.101 00:06:17.101 real 0m0.456s 00:06:17.101 user 0m0.267s 00:06:17.101 sys 0m0.124s 00:06:17.101 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.101 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:06:17.101 ************************************ 00:06:17.101 END TEST dd_invalid_input_count 00:06:17.101 ************************************ 00:06:17.101 20:29:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:06:17.101 20:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.101 20:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.101 20:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:17.364 ************************************ 00:06:17.364 START TEST dd_invalid_output_count 00:06:17.364 ************************************ 00:06:17.364 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:06:17.364 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:17.364 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:17.364 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:06:17.364 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:17.364 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:06:17.364 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:17.364 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.364 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:06:17.364 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:06:17.364 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:06:17.364 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.364 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.364 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.364 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.364 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.364 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.364 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:17.364 20:29:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:17.364 { 00:06:17.364 "subsystems": [ 00:06:17.364 { 00:06:17.364 "subsystem": "bdev", 00:06:17.364 "config": [ 00:06:17.364 { 00:06:17.364 "params": { 00:06:17.364 "block_size": 512, 00:06:17.364 "num_blocks": 512, 00:06:17.364 "name": "malloc0" 00:06:17.364 }, 00:06:17.364 "method": "bdev_malloc_create" 00:06:17.364 }, 00:06:17.364 { 00:06:17.364 "method": "bdev_wait_for_examine" 00:06:17.364 } 00:06:17.364 ] 00:06:17.364 } 00:06:17.364 ] 00:06:17.364 } 00:06:17.364 [2024-11-26 20:29:31.693963] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
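Note the mix of endpoint types in the invocation above: --if/--of name regular files (the dd.dump0/dd.dump1 scratch files), while --ib/--ob name bdevs created by the --json config, here a single 512-block malloc0. Reading from a file and asking for 513 blocks of output into that 512-block bdev is what this case checks. A hedged reproduction (config path and layout are illustrative):

cat > /tmp/dd_malloc0.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "block_size": 512, "num_blocks": 512, "name": "malloc0" },
    "method": "bdev_malloc_create" },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
# The output bdev has only 512 blocks, so --count=513 is expected to be rejected
# regardless of input size; a successful copy would also need enough data in dd.dump0.
"$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
           --ob=malloc0 --bs=512 --count=513 --json /tmp/dd_malloc0.json \
    || echo "count beyond the output bdev rejected (exit $?)"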
00:06:17.364 [2024-11-26 20:29:31.694031] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61383 ] 00:06:17.364 [2024-11-26 20:29:31.834939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.364 [2024-11-26 20:29:31.878188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.626 [2024-11-26 20:29:31.919947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.626 [2024-11-26 20:29:31.972510] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:06:17.626 [2024-11-26 20:29:31.972569] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.626 [2024-11-26 20:29:32.055899] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:17.626 00:06:17.626 real 0m0.453s 00:06:17.626 user 0m0.267s 00:06:17.626 sys 0m0.114s 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:06:17.626 ************************************ 00:06:17.626 END TEST dd_invalid_output_count 00:06:17.626 ************************************ 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:17.626 ************************************ 00:06:17.626 START TEST dd_bs_not_multiple 00:06:17.626 ************************************ 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:17.626 20:29:32 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:17.626 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:17.933 { 00:06:17.933 "subsystems": [ 00:06:17.933 { 00:06:17.933 "subsystem": "bdev", 00:06:17.933 "config": [ 00:06:17.933 { 00:06:17.933 "params": { 00:06:17.933 "block_size": 512, 00:06:17.933 "num_blocks": 512, 00:06:17.933 "name": "malloc0" 00:06:17.933 }, 00:06:17.933 "method": "bdev_malloc_create" 00:06:17.933 }, 00:06:17.933 { 00:06:17.933 "params": { 00:06:17.933 "block_size": 512, 00:06:17.933 "num_blocks": 512, 00:06:17.933 "name": "malloc1" 00:06:17.933 }, 00:06:17.933 "method": "bdev_malloc_create" 00:06:17.933 }, 00:06:17.933 { 00:06:17.933 "method": "bdev_wait_for_examine" 00:06:17.933 } 00:06:17.933 ] 00:06:17.933 } 00:06:17.933 ] 00:06:17.933 } 00:06:17.933 [2024-11-26 20:29:32.218446] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
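The final case drives the same malloc0/malloc1 pair with --bs=513. The malloc bdevs have a 512-byte native block size and 513 % 512 == 1, so the I/O unit cannot be expressed in whole input blocks and spdk_dd is expected to refuse it; any multiple of 512 satisfies that check. A small sketch of a conforming value, assuming the /tmp/dd_malloc.json file from the earlier example is still in place:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
# 1024 = 2 * 512 is a --bs that divides evenly into the bdevs' 512-byte blocks.
"$SPDK_DD" --ib=malloc0 --ob=malloc1 --bs=1024 --json /tmp/dd_malloc.json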
00:06:17.933 [2024-11-26 20:29:32.218511] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61414 ] 00:06:17.933 [2024-11-26 20:29:32.359300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.933 [2024-11-26 20:29:32.401899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.933 [2024-11-26 20:29:32.443880] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.196 [2024-11-26 20:29:32.503430] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:06:18.196 [2024-11-26 20:29:32.503492] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:18.196 [2024-11-26 20:29:32.587886] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:18.196 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:06:18.196 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:18.196 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:06:18.196 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:06:18.196 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:06:18.196 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:18.196 00:06:18.196 real 0m0.467s 00:06:18.196 user 0m0.290s 00:06:18.196 sys 0m0.113s 00:06:18.196 ************************************ 00:06:18.196 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.196 20:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:06:18.196 END TEST dd_bs_not_multiple 00:06:18.196 ************************************ 00:06:18.196 00:06:18.196 real 0m5.329s 00:06:18.196 user 0m2.571s 00:06:18.196 sys 0m1.984s 00:06:18.196 20:29:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.196 ************************************ 00:06:18.196 END TEST spdk_dd_negative 00:06:18.196 20:29:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:18.196 ************************************ 00:06:18.196 00:06:18.196 real 1m12.527s 00:06:18.196 user 0m45.978s 00:06:18.196 sys 0m33.575s 00:06:18.196 20:29:32 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.196 ************************************ 00:06:18.196 END TEST spdk_dd 00:06:18.196 ************************************ 00:06:18.196 20:29:32 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:18.457 20:29:32 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:18.457 20:29:32 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:18.457 20:29:32 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:18.457 20:29:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:18.457 20:29:32 -- common/autotest_common.sh@10 -- # set +x 00:06:18.457 20:29:32 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:18.457 20:29:32 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:18.457 20:29:32 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:18.457 20:29:32 -- spdk/autotest.sh@277 -- 
# export NET_TYPE 00:06:18.457 20:29:32 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:18.457 20:29:32 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:18.457 20:29:32 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:18.457 20:29:32 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:18.457 20:29:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.457 20:29:32 -- common/autotest_common.sh@10 -- # set +x 00:06:18.457 ************************************ 00:06:18.457 START TEST nvmf_tcp 00:06:18.457 ************************************ 00:06:18.457 20:29:32 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:18.457 * Looking for test storage... 00:06:18.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:18.457 20:29:32 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.457 20:29:32 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.457 20:29:32 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.457 20:29:32 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.457 20:29:32 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:18.457 20:29:32 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.457 20:29:32 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.457 --rc genhtml_branch_coverage=1 00:06:18.457 --rc genhtml_function_coverage=1 00:06:18.457 --rc genhtml_legend=1 00:06:18.457 --rc geninfo_all_blocks=1 00:06:18.457 --rc geninfo_unexecuted_blocks=1 00:06:18.457 00:06:18.457 ' 00:06:18.457 20:29:32 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.457 --rc genhtml_branch_coverage=1 00:06:18.457 --rc genhtml_function_coverage=1 00:06:18.457 --rc genhtml_legend=1 00:06:18.457 --rc geninfo_all_blocks=1 00:06:18.457 --rc geninfo_unexecuted_blocks=1 00:06:18.457 00:06:18.457 ' 00:06:18.457 20:29:32 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.457 --rc genhtml_branch_coverage=1 00:06:18.457 --rc genhtml_function_coverage=1 00:06:18.457 --rc genhtml_legend=1 00:06:18.457 --rc geninfo_all_blocks=1 00:06:18.457 --rc geninfo_unexecuted_blocks=1 00:06:18.457 00:06:18.457 ' 00:06:18.457 20:29:32 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.457 --rc genhtml_branch_coverage=1 00:06:18.457 --rc genhtml_function_coverage=1 00:06:18.457 --rc genhtml_legend=1 00:06:18.457 --rc geninfo_all_blocks=1 00:06:18.457 --rc geninfo_unexecuted_blocks=1 00:06:18.457 00:06:18.457 ' 00:06:18.457 20:29:32 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:18.457 20:29:32 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:18.457 20:29:32 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:18.457 20:29:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:18.457 20:29:32 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.457 20:29:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:18.457 ************************************ 00:06:18.457 START TEST nvmf_target_core 00:06:18.457 ************************************ 00:06:18.457 20:29:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:18.719 * Looking for test storage... 00:06:18.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.719 --rc genhtml_branch_coverage=1 00:06:18.719 --rc genhtml_function_coverage=1 00:06:18.719 --rc genhtml_legend=1 00:06:18.719 --rc geninfo_all_blocks=1 00:06:18.719 --rc geninfo_unexecuted_blocks=1 00:06:18.719 00:06:18.719 ' 00:06:18.719 20:29:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.719 --rc genhtml_branch_coverage=1 00:06:18.719 --rc genhtml_function_coverage=1 00:06:18.719 --rc genhtml_legend=1 00:06:18.719 --rc geninfo_all_blocks=1 00:06:18.719 --rc geninfo_unexecuted_blocks=1 00:06:18.719 00:06:18.720 ' 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.720 --rc genhtml_branch_coverage=1 00:06:18.720 --rc genhtml_function_coverage=1 00:06:18.720 --rc genhtml_legend=1 00:06:18.720 --rc geninfo_all_blocks=1 00:06:18.720 --rc geninfo_unexecuted_blocks=1 00:06:18.720 00:06:18.720 ' 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.720 --rc genhtml_branch_coverage=1 00:06:18.720 --rc genhtml_function_coverage=1 00:06:18.720 --rc genhtml_legend=1 00:06:18.720 --rc geninfo_all_blocks=1 00:06:18.720 --rc geninfo_unexecuted_blocks=1 00:06:18.720 00:06:18.720 ' 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:18.720 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:18.720 ************************************ 00:06:18.720 START TEST nvmf_host_management 00:06:18.720 ************************************ 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:18.720 * Looking for test storage... 
00:06:18.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.720 --rc genhtml_branch_coverage=1 00:06:18.720 --rc genhtml_function_coverage=1 00:06:18.720 --rc genhtml_legend=1 00:06:18.720 --rc geninfo_all_blocks=1 00:06:18.720 --rc geninfo_unexecuted_blocks=1 00:06:18.720 00:06:18.720 ' 00:06:18.720 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.720 --rc genhtml_branch_coverage=1 00:06:18.721 --rc genhtml_function_coverage=1 00:06:18.721 --rc genhtml_legend=1 00:06:18.721 --rc geninfo_all_blocks=1 00:06:18.721 --rc geninfo_unexecuted_blocks=1 00:06:18.721 00:06:18.721 ' 00:06:18.721 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.721 --rc genhtml_branch_coverage=1 00:06:18.721 --rc genhtml_function_coverage=1 00:06:18.721 --rc genhtml_legend=1 00:06:18.721 --rc geninfo_all_blocks=1 00:06:18.721 --rc geninfo_unexecuted_blocks=1 00:06:18.721 00:06:18.721 ' 00:06:18.721 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.721 --rc genhtml_branch_coverage=1 00:06:18.721 --rc genhtml_function_coverage=1 00:06:18.721 --rc genhtml_legend=1 00:06:18.721 --rc geninfo_all_blocks=1 00:06:18.721 --rc geninfo_unexecuted_blocks=1 00:06:18.721 00:06:18.721 ' 00:06:18.721 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:06:18.721 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:18.721 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.721 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.721 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.721 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.721 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.721 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.721 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.721 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.721 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.721 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:18.982 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:18.982 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:18.982 20:29:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:18.983 Cannot find device "nvmf_init_br" 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:18.983 Cannot find device "nvmf_init_br2" 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:18.983 Cannot find device "nvmf_tgt_br" 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:18.983 Cannot find device "nvmf_tgt_br2" 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:18.983 Cannot find device "nvmf_init_br" 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:18.983 Cannot find device "nvmf_init_br2" 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:18.983 Cannot find device "nvmf_tgt_br" 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:18.983 Cannot find device "nvmf_tgt_br2" 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:18.983 Cannot find device "nvmf_br" 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:18.983 Cannot find device "nvmf_init_if" 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:18.983 Cannot find device "nvmf_init_if2" 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:18.983 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:18.983 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:18.983 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:06:19.244 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:19.244 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:19.244 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:19.244 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:19.244 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:19.244 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:19.244 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:19.244 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:19.244 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:19.244 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:19.244 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:19.244 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:19.244 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:19.244 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:06:19.244 00:06:19.244 --- 10.0.0.3 ping statistics --- 00:06:19.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.244 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:06:19.244 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:19.244 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:19.244 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:06:19.244 00:06:19.244 --- 10.0.0.4 ping statistics --- 00:06:19.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.244 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:06:19.244 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:19.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:19.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:06:19.244 00:06:19.244 --- 10.0.0.1 ping statistics --- 00:06:19.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.244 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:06:19.245 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:19.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:19.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:06:19.245 00:06:19.245 --- 10.0.0.2 ping statistics --- 00:06:19.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.245 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:06:19.245 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:19.245 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:06:19.245 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:19.245 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:19.245 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:19.245 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:19.245 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:19.245 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:19.245 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:19.245 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:19.245 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:19.245 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:19.245 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:19.245 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:19.245 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.245 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=61740 00:06:19.245 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 61740 00:06:19.245 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 61740 ']' 00:06:19.245 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.245 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.245 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.245 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.245 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.245 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:19.245 [2024-11-26 20:29:33.696515] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:06:19.245 [2024-11-26 20:29:33.696573] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:19.506 [2024-11-26 20:29:33.842858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:19.506 [2024-11-26 20:29:33.886581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:19.506 [2024-11-26 20:29:33.886634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:19.506 [2024-11-26 20:29:33.886641] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:19.506 [2024-11-26 20:29:33.886646] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:19.506 [2024-11-26 20:29:33.886650] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:19.506 [2024-11-26 20:29:33.887513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.506 [2024-11-26 20:29:33.888005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.506 [2024-11-26 20:29:33.888046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:19.506 [2024-11-26 20:29:33.888052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.506 [2024-11-26 20:29:33.923600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.073 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.073 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:20.073 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:20.073 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:20.073 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.073 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:20.073 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:20.073 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.073 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.073 [2024-11-26 20:29:34.620893] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:20.333 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.333 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:20.333 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:20.333 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.333 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
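A note for readability: the trace above creates the TCP transport with "rpc_cmd nvmf_create_transport -t tcp -o -u 8192", and the subsystem itself is configured next by cat'ing a heredoc of RPCs into rpc_cmd (the @23 cat / @30 rpc_cmd steps below); the individual RPC lines are not echoed in this log. The sketch below is only a hedged, illustrative reconstruction built from values that do appear nearby in the trace (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, NVMF_SERIAL=SPDKISFASTANDAWESOME, subsystem nqn.2016-06.io.spdk:cnode0, host nqn.2016-06.io.spdk:host0, listener 10.0.0.3:4420), not the literal contents of the test script:

    # Illustrative sketch only -- the real test batches these through rpc_cmd,
    # talking to the nvmf_tgt started above on its default /var/tmp/spdk.sock.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420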
00:06:20.333 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:20.333 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:20.333 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.333 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.333 Malloc0 00:06:20.333 [2024-11-26 20:29:34.693010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:20.333 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.333 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:20.333 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:20.333 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.333 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=61800 00:06:20.333 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 61800 /var/tmp/bdevperf.sock 00:06:20.333 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 61800 ']' 00:06:20.333 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:20.334 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.334 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:20.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:06:20.334 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.334 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:20.334 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.334 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:20.334 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:20.334 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:20.334 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:20.334 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:20.334 { 00:06:20.334 "params": { 00:06:20.334 "name": "Nvme$subsystem", 00:06:20.334 "trtype": "$TEST_TRANSPORT", 00:06:20.334 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:20.334 "adrfam": "ipv4", 00:06:20.334 "trsvcid": "$NVMF_PORT", 00:06:20.334 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:20.334 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:20.334 "hdgst": ${hdgst:-false}, 00:06:20.334 "ddgst": ${ddgst:-false} 00:06:20.334 }, 00:06:20.334 "method": "bdev_nvme_attach_controller" 00:06:20.334 } 00:06:20.334 EOF 00:06:20.334 )") 00:06:20.334 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:20.334 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:20.334 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:20.334 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:20.334 "params": { 00:06:20.334 "name": "Nvme0", 00:06:20.334 "trtype": "tcp", 00:06:20.334 "traddr": "10.0.0.3", 00:06:20.334 "adrfam": "ipv4", 00:06:20.334 "trsvcid": "4420", 00:06:20.334 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:20.334 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:20.334 "hdgst": false, 00:06:20.334 "ddgst": false 00:06:20.334 }, 00:06:20.334 "method": "bdev_nvme_attach_controller" 00:06:20.334 }' 00:06:20.334 [2024-11-26 20:29:34.770998] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:06:20.334 [2024-11-26 20:29:34.771084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61800 ] 00:06:20.595 [2024-11-26 20:29:34.900525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.595 [2024-11-26 20:29:34.934180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.595 [2024-11-26 20:29:34.972747] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.595 Running I/O for 10 seconds... 
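After "Running I/O for 10 seconds..." the test confirms that host I/O is actually flowing: the waitforio loop traced below polls bdevperf over its RPC socket and compares the read counter against a threshold of 100. A minimal stand-alone sketch of that check, using the same RPC call, bdev name, and threshold that appear in the trace:

    # Query bdevperf (listening on /var/tmp/bdevperf.sock) for per-bdev I/O stats
    # and treat 100 or more completed reads on Nvme0n1 as "I/O is flowing".
    reads=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
                bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    if [ "$reads" -ge 100 ]; then
        echo "host I/O confirmed: $reads reads completed"
    fi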
00:06:21.165 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.165 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:21.165 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:21.165 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.165 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.165 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.165 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:21.165 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:21.165 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:21.165 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:21.165 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:21.165 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:21.165 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:21.165 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:21.426 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:21.426 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.426 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.426 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:21.426 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.426 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1411 00:06:21.426 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1411 -ge 100 ']' 00:06:21.426 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:21.426 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:21.426 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:21.426 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:21.426 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.426 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.426 [2024-11-26 
20:29:35.749898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x633e50 is same with the state(6) to be set 00:06:21.426 [... last message repeated for tqpair=0x633e50, timestamps 20:29:35.749939 through 20:29:35.750244 ...] 00:06:21.427 [2024-11-26 20:29:35.750304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.427 [2024-11-26 20:29:35.750333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.427 [... the same READ / ABORTED - SQ DELETION message pair repeated for READ cid:1 through cid:60 (lba 49280 through 56832, len:128 each) ...] 00:06:21.429 [2024-11-26 20:29:35.751005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.429 [2024-11-26 20:29:35.751010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.429 [2024-11-26 20:29:35.751017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.429 [2024-11-26 20:29:35.751021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.430 [2024-11-26 20:29:35.751027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.430 [2024-11-26 20:29:35.751032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.430 [2024-11-26 20:29:35.751037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d42d0 is same with the state(6) to be set 00:06:21.430 [2024-11-26 20:29:35.752006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:21.430 task offset: 49152 on job bdev=Nvme0n1 fails 00:06:21.430 00:06:21.430 Latency(us) 00:06:21.430 [2024-11-26T20:29:35.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:21.430 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:21.430 Job: Nvme0n1 ended in about 0.67 seconds with error 00:06:21.430 Verification LBA range: start 0x0 length 0x400 00:06:21.430 Nvme0n1 : 0.67 2086.27 130.39 94.83 0.00 28785.53 1940.87 28029.24 00:06:21.430 [2024-11-26T20:29:35.985Z] =================================================================================================================== 00:06:21.430 [2024-11-26T20:29:35.985Z] Total : 2086.27 130.39 94.83 0.00 28785.53 1940.87 28029.24 00:06:21.430 [2024-11-26 20:29:35.753664] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:21.430 [2024-11-26 20:29:35.753686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d9ce0 (9): Bad file descriptor 00:06:21.430 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.430 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:21.430 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.430 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.430 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.430 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:21.430 [2024-11-26 20:29:35.764638] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
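The short, failed run summarized in the Latency(us) table above is the direct result of the nvmf_subsystem_remove_host RPC issued a few lines earlier: the host NQN was removed from cnode0 while bdevperf still had outstanding I/O (-q 64), so the in-flight READs completed as ABORTED - SQ DELETION and the job ended after roughly 0.67 seconds with error; the controller reset then succeeds once nvmf_subsystem_add_host puts the host back. The MiB/s column is just IOPS scaled by the 64 KiB I/O size (-o 65536): 2086.27 IOPS x 65536 B / 1048576 B per MiB = 2086.27 / 16 ≈ 130.39 MiB/s, matching the table. A throwaway check of that scaling (not part of the test script):

awk 'BEGIN { printf "%.2f MiB/s\n", 2086.27 * 65536 / 1048576 }'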
00:06:22.370 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 61800 00:06:22.370 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (61800) - No such process 00:06:22.370 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:22.370 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:22.370 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:22.370 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:22.370 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:22.370 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:22.370 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:22.370 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:22.370 { 00:06:22.370 "params": { 00:06:22.370 "name": "Nvme$subsystem", 00:06:22.370 "trtype": "$TEST_TRANSPORT", 00:06:22.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:22.370 "adrfam": "ipv4", 00:06:22.370 "trsvcid": "$NVMF_PORT", 00:06:22.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:22.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:22.370 "hdgst": ${hdgst:-false}, 00:06:22.370 "ddgst": ${ddgst:-false} 00:06:22.370 }, 00:06:22.370 "method": "bdev_nvme_attach_controller" 00:06:22.370 } 00:06:22.370 EOF 00:06:22.370 )") 00:06:22.370 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:22.370 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:22.370 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:22.370 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:22.370 "params": { 00:06:22.370 "name": "Nvme0", 00:06:22.370 "trtype": "tcp", 00:06:22.370 "traddr": "10.0.0.3", 00:06:22.370 "adrfam": "ipv4", 00:06:22.370 "trsvcid": "4420", 00:06:22.370 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:22.370 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:22.370 "hdgst": false, 00:06:22.370 "ddgst": false 00:06:22.370 }, 00:06:22.370 "method": "bdev_nvme_attach_controller" 00:06:22.370 }' 00:06:22.370 [2024-11-26 20:29:36.803020] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:06:22.370 [2024-11-26 20:29:36.803226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61838 ] 00:06:22.632 [2024-11-26 20:29:36.943421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.632 [2024-11-26 20:29:36.981506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.632 [2024-11-26 20:29:37.022368] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:22.632 Running I/O for 1 seconds... 00:06:24.027 1862.00 IOPS, 116.38 MiB/s 00:06:24.027 Latency(us) 00:06:24.027 [2024-11-26T20:29:38.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:24.027 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:24.027 Verification LBA range: start 0x0 length 0x400 00:06:24.027 Nvme0n1 : 1.04 1916.20 119.76 0.00 0.00 32812.64 3276.80 30247.38 00:06:24.027 [2024-11-26T20:29:38.582Z] =================================================================================================================== 00:06:24.027 [2024-11-26T20:29:38.582Z] Total : 1916.20 119.76 0.00 0.00 32812.64 3276.80 30247.38 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:24.027 rmmod nvme_tcp 00:06:24.027 rmmod nvme_fabrics 00:06:24.027 rmmod nvme_keyring 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 61740 ']' 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 61740 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 61740 ']' 00:06:24.027 20:29:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 61740 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61740 00:06:24.027 killing process with pid 61740 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61740' 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 61740 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 61740 00:06:24.027 [2024-11-26 20:29:38.536834] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:06:24.027 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:06:24.348 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:06:24.348 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:06:24.348 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:06:24.348 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:06:24.348 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:06:24.348 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:06:24.348 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:06:24.348 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:06:24.348 20:29:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:06:24.348 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:06:24.348 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:24.348 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:24.348 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:06:24.348 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.348 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:24.348 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.348 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:06:24.349 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:24.349 00:06:24.349 real 0m5.639s 00:06:24.349 user 0m21.185s 00:06:24.349 sys 0m1.204s 00:06:24.349 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.349 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.349 ************************************ 00:06:24.349 END TEST nvmf_host_management 00:06:24.349 ************************************ 00:06:24.349 20:29:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:24.349 20:29:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:24.349 20:29:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.349 20:29:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:24.349 ************************************ 00:06:24.349 START TEST nvmf_lvol 00:06:24.349 ************************************ 00:06:24.349 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:24.349 * Looking for test storage... 
00:06:24.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:24.349 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.349 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.349 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.609 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.609 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.609 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.609 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.609 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.609 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.609 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.609 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.609 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.609 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.609 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.610 --rc genhtml_branch_coverage=1 00:06:24.610 --rc genhtml_function_coverage=1 00:06:24.610 --rc genhtml_legend=1 00:06:24.610 --rc geninfo_all_blocks=1 00:06:24.610 --rc geninfo_unexecuted_blocks=1 00:06:24.610 00:06:24.610 ' 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.610 --rc genhtml_branch_coverage=1 00:06:24.610 --rc genhtml_function_coverage=1 00:06:24.610 --rc genhtml_legend=1 00:06:24.610 --rc geninfo_all_blocks=1 00:06:24.610 --rc geninfo_unexecuted_blocks=1 00:06:24.610 00:06:24.610 ' 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.610 --rc genhtml_branch_coverage=1 00:06:24.610 --rc genhtml_function_coverage=1 00:06:24.610 --rc genhtml_legend=1 00:06:24.610 --rc geninfo_all_blocks=1 00:06:24.610 --rc geninfo_unexecuted_blocks=1 00:06:24.610 00:06:24.610 ' 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.610 --rc genhtml_branch_coverage=1 00:06:24.610 --rc genhtml_function_coverage=1 00:06:24.610 --rc genhtml_legend=1 00:06:24.610 --rc geninfo_all_blocks=1 00:06:24.610 --rc geninfo_unexecuted_blocks=1 00:06:24.610 00:06:24.610 ' 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:24.610 20:29:38 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:24.610 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:24.610 
20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:24.610 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:24.611 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:24.611 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:24.611 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:24.611 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:24.611 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:24.611 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:24.611 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:24.611 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:24.611 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:24.611 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:24.611 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:24.611 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:24.611 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:24.611 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
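The interface and address variables above (together with the two bridge names that follow) describe the virtual topology that nvmf_veth_init builds later in this trace: the initiator veth pairs stay on the host, the target veth pairs are moved into the nvmf_tgt_ns_spdk namespace, and everything is joined through the nvmf_br bridge. A condensed sketch of one pair on each side, pulled from the ip commands further down in this log (the names and 10.0.0.x addresses are the ones this harness uses, not general defaults; the ordering and && chaining here are editorial):

    # bridge that ties both sides together
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    # initiator side stays on the host
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_init_br master nvmf_br
    # target side lives inside the nvmf_tgt_ns_spdk namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br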
00:06:24.611 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:24.611 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:24.611 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:24.611 Cannot find device "nvmf_init_br" 00:06:24.611 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:06:24.611 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:24.611 Cannot find device "nvmf_init_br2" 00:06:24.611 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:06:24.611 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:24.611 Cannot find device "nvmf_tgt_br" 00:06:24.611 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:06:24.611 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:24.611 Cannot find device "nvmf_tgt_br2" 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:24.611 Cannot find device "nvmf_init_br" 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:24.611 Cannot find device "nvmf_init_br2" 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:24.611 Cannot find device "nvmf_tgt_br" 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:24.611 Cannot find device "nvmf_tgt_br2" 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:24.611 Cannot find device "nvmf_br" 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:24.611 Cannot find device "nvmf_init_if" 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:24.611 Cannot find device "nvmf_init_if2" 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:24.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:24.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:24.611 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:24.868 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:24.868 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:06:24.868 00:06:24.868 --- 10.0.0.3 ping statistics --- 00:06:24.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.868 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:24.868 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:24.868 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:06:24.868 00:06:24.868 --- 10.0.0.4 ping statistics --- 00:06:24.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.868 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:24.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:24.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:06:24.868 00:06:24.868 --- 10.0.0.1 ping statistics --- 00:06:24.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.868 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:24.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:24.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:06:24.868 00:06:24.868 --- 10.0.0.2 ping statistics --- 00:06:24.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.868 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62093 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62093 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 62093 ']' 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:24.868 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:24.868 [2024-11-26 20:29:39.310789] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
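nvmfappstart has just launched nvmf_tgt inside the target namespace (pid 62093), and waitforlisten now blocks until the app answers on its default RPC socket, /var/tmp/spdk.sock; the SPDK and DPDK startup notices that follow are printed while that wait is in progress. A rough hand-rolled equivalent of this start-and-wait step, assuming the same build paths as this run (rpc_get_methods is used here only as a liveness probe; the harness itself polls the socket):

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!
    # loop until the target's RPC server is reachable on /var/tmp/spdk.sock
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done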
00:06:24.868 [2024-11-26 20:29:39.310852] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:25.125 [2024-11-26 20:29:39.452284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:25.125 [2024-11-26 20:29:39.490119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:25.125 [2024-11-26 20:29:39.490158] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:25.125 [2024-11-26 20:29:39.490164] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:25.125 [2024-11-26 20:29:39.490169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:25.125 [2024-11-26 20:29:39.490174] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:25.125 [2024-11-26 20:29:39.490867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.125 [2024-11-26 20:29:39.491022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.125 [2024-11-26 20:29:39.491029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.125 [2024-11-26 20:29:39.523540] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.690 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.690 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:25.690 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:25.690 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:25.690 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:25.690 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:25.690 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:25.948 [2024-11-26 20:29:40.335245] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.948 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:26.206 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:26.206 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:26.464 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:26.464 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:26.464 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:26.722 20:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c2306926-a471-450a-a2b9-356e630f68b2 00:06:26.722 20:29:41 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c2306926-a471-450a-a2b9-356e630f68b2 lvol 20 00:06:26.981 20:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=5a434ea4-5e8a-4ed2-bcb2-9535396bd179 00:06:26.981 20:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:27.265 20:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5a434ea4-5e8a-4ed2-bcb2-9535396bd179 00:06:27.265 20:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:06:27.522 [2024-11-26 20:29:41.939864] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:27.522 20:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:27.780 20:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62163 00:06:27.780 20:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:27.780 20:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:28.732 20:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 5a434ea4-5e8a-4ed2-bcb2-9535396bd179 MY_SNAPSHOT 00:06:28.990 20:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=463b5151-9f46-4768-95f7-b9a198259418 00:06:28.990 20:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 5a434ea4-5e8a-4ed2-bcb2-9535396bd179 30 00:06:29.249 20:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 463b5151-9f46-4768-95f7-b9a198259418 MY_CLONE 00:06:29.249 20:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=410f316f-e53b-40ba-bb5d-888f855a5f16 00:06:29.249 20:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 410f316f-e53b-40ba-bb5d-888f855a5f16 00:06:29.814 20:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62163 00:06:37.915 Initializing NVMe Controllers 00:06:37.915 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:06:37.915 Controller IO queue size 128, less than required. 00:06:37.915 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:37.915 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:37.915 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:37.915 Initialization complete. Launching workers. 
00:06:37.915 ======================================================== 00:06:37.915 Latency(us) 00:06:37.915 Device Information : IOPS MiB/s Average min max 00:06:37.915 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 14547.00 56.82 8801.17 2155.50 58478.35 00:06:37.915 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15832.20 61.84 8088.82 338.64 52856.43 00:06:37.915 ======================================================== 00:06:37.915 Total : 30379.20 118.67 8429.93 338.64 58478.35 00:06:37.915 00:06:37.915 20:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:38.172 20:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5a434ea4-5e8a-4ed2-bcb2-9535396bd179 00:06:38.436 20:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c2306926-a471-450a-a2b9-356e630f68b2 00:06:38.693 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:38.693 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:38.693 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:38.693 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:38.693 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:38.693 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:38.693 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:38.693 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:38.693 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:38.693 rmmod nvme_tcp 00:06:38.693 rmmod nvme_fabrics 00:06:38.693 rmmod nvme_keyring 00:06:38.693 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:38.693 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:38.693 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:38.693 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62093 ']' 00:06:38.693 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62093 00:06:38.693 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 62093 ']' 00:06:38.693 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 62093 00:06:38.693 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:38.693 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.693 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62093 00:06:38.693 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.693 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.693 killing process with pid 62093 00:06:38.693 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62093' 00:06:38.693 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 62093 00:06:38.693 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 62093 00:06:38.951 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:38.951 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:38.951 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:38.951 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:38.951 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:38.951 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:38.951 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:38.951 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:38.951 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:06:38.951 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:06:38.951 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:06:38.951 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:06:38.951 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:06:38.951 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:06:38.951 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:06:38.951 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:06:38.951 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:06:38.951 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:06:38.951 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:06:38.951 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:06:38.951 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:38.951 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:06:39.210 00:06:39.210 real 0m14.743s 00:06:39.210 user 1m1.740s 00:06:39.210 sys 0m3.425s 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:39.210 ************************************ 00:06:39.210 END TEST nvmf_lvol 00:06:39.210 ************************************ 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:39.210 ************************************ 00:06:39.210 START TEST nvmf_lvs_grow 00:06:39.210 ************************************ 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:39.210 * Looking for test storage... 00:06:39.210 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:39.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.210 --rc genhtml_branch_coverage=1 00:06:39.210 --rc genhtml_function_coverage=1 00:06:39.210 --rc genhtml_legend=1 00:06:39.210 --rc geninfo_all_blocks=1 00:06:39.210 --rc geninfo_unexecuted_blocks=1 00:06:39.210 00:06:39.210 ' 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:39.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.210 --rc genhtml_branch_coverage=1 00:06:39.210 --rc genhtml_function_coverage=1 00:06:39.210 --rc genhtml_legend=1 00:06:39.210 --rc geninfo_all_blocks=1 00:06:39.210 --rc geninfo_unexecuted_blocks=1 00:06:39.210 00:06:39.210 ' 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:39.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.210 --rc genhtml_branch_coverage=1 00:06:39.210 --rc genhtml_function_coverage=1 00:06:39.210 --rc genhtml_legend=1 00:06:39.210 --rc geninfo_all_blocks=1 00:06:39.210 --rc geninfo_unexecuted_blocks=1 00:06:39.210 00:06:39.210 ' 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:39.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.210 --rc genhtml_branch_coverage=1 00:06:39.210 --rc genhtml_function_coverage=1 00:06:39.210 --rc genhtml_legend=1 00:06:39.210 --rc geninfo_all_blocks=1 00:06:39.210 --rc geninfo_unexecuted_blocks=1 00:06:39.210 00:06:39.210 ' 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:39.210 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:39.468 20:29:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:39.468 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:39.468 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:39.469 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
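nvmf_lvs_grow.sh keeps two RPC endpoints apart: rpc_py with no socket argument drives the nvmf target on the default /var/tmp/spdk.sock, while bdevperf_rpc_sock names a separate socket for the bdevperf process the test appears to start later (note the bdevperf_pid local further down), selected via rpc.py's -s option. A small illustration of the split, assuming both apps are already up (bdev_get_bdevs is just a convenient read-only query, not something this script necessarily calls):

    # talks to nvmf_tgt on the default socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs
    # talks to the bdevperf instance on its private socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs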
00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:39.469 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:39.470 Cannot find device "nvmf_init_br" 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:39.470 Cannot find device "nvmf_init_br2" 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:39.470 Cannot find device "nvmf_tgt_br" 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:39.470 Cannot find device "nvmf_tgt_br2" 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:39.470 Cannot find device "nvmf_init_br" 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:39.470 Cannot find device "nvmf_init_br2" 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:39.470 Cannot find device "nvmf_tgt_br" 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:39.470 Cannot find device "nvmf_tgt_br2" 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:39.470 Cannot find device "nvmf_br" 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:39.470 Cannot find device "nvmf_init_if" 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:39.470 Cannot find device "nvmf_init_if2" 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:39.470 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:39.470 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:39.470 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:39.470 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:39.470 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:39.470 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:39.470 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:39.470 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:39.470 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
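The ipts entries that follow are a thin wrapper around iptables: each ACCEPT rule the harness inserts for port 4420 is tagged with an SPDK_NVMF comment, and teardown (the iptr step seen in the nvmf_lvol cleanup above) later restores the ruleset minus anything carrying that tag. The pattern, copied from the commands in this log:

    # insert a rule that carries its own removal tag
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # cleanup: keep every rule except the SPDK_NVMF-tagged ones
    iptables-save | grep -v SPDK_NVMF | iptables-restore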
00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:39.729 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:39.729 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:06:39.729 00:06:39.729 --- 10.0.0.3 ping statistics --- 00:06:39.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.729 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:39.729 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:39.729 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:06:39.729 00:06:39.729 --- 10.0.0.4 ping statistics --- 00:06:39.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.729 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:39.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:39.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 00:06:39.729 00:06:39.729 --- 10.0.0.1 ping statistics --- 00:06:39.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.729 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:39.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:39.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:06:39.729 00:06:39.729 --- 10.0.0.2 ping statistics --- 00:06:39.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.729 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:39.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=62548 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 62548 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 62548 ']' 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:39.729 20:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:39.729 [2024-11-26 20:29:54.153647] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:06:39.729 [2024-11-26 20:29:54.153710] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:39.987 [2024-11-26 20:29:54.296244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.987 [2024-11-26 20:29:54.333328] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:39.987 [2024-11-26 20:29:54.333373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:39.987 [2024-11-26 20:29:54.333380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:39.987 [2024-11-26 20:29:54.333385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:39.987 [2024-11-26 20:29:54.333389] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:39.987 [2024-11-26 20:29:54.333666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.987 [2024-11-26 20:29:54.366551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.553 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.553 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:40.553 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:40.553 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:40.553 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:40.553 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:40.553 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:40.811 [2024-11-26 20:29:55.276804] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:40.812 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:40.812 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.812 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.812 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:40.812 ************************************ 00:06:40.812 START TEST lvs_grow_clean 00:06:40.812 ************************************ 00:06:40.812 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:40.812 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:40.812 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:40.812 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:40.812 20:29:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:40.812 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:40.812 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:40.812 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:40.812 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:40.812 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:41.070 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:41.070 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:41.373 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=74bc76d1-1454-4a9c-81b6-d5b1c3d79ef0 00:06:41.373 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:41.373 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74bc76d1-1454-4a9c-81b6-d5b1c3d79ef0 00:06:41.631 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:41.631 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:41.631 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 74bc76d1-1454-4a9c-81b6-d5b1c3d79ef0 lvol 150 00:06:41.631 20:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4e2ef493-797b-4325-8ede-5a2942016c69 00:06:41.631 20:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:41.631 20:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:41.890 [2024-11-26 20:29:56.353547] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:41.890 [2024-11-26 20:29:56.353628] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:41.890 true 00:06:41.890 20:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74bc76d1-1454-4a9c-81b6-d5b1c3d79ef0 00:06:41.890 20:29:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:42.149 20:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:42.149 20:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:42.407 20:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4e2ef493-797b-4325-8ede-5a2942016c69 00:06:42.666 20:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:06:42.666 [2024-11-26 20:29:57.169998] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:42.666 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:42.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:42.924 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=62625 00:06:42.924 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:42.924 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 62625 /var/tmp/bdevperf.sock 00:06:42.924 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:42.924 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 62625 ']' 00:06:42.924 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:42.924 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.924 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:42.924 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.924 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:42.924 [2024-11-26 20:29:57.426468] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:06:42.924 [2024-11-26 20:29:57.426538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62625 ] 00:06:43.182 [2024-11-26 20:29:57.566867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.182 [2024-11-26 20:29:57.604236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.182 [2024-11-26 20:29:57.636882] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.115 20:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.115 20:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:44.115 20:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:44.115 Nvme0n1 00:06:44.115 20:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:44.374 [ 00:06:44.374 { 00:06:44.374 "name": "Nvme0n1", 00:06:44.374 "aliases": [ 00:06:44.374 "4e2ef493-797b-4325-8ede-5a2942016c69" 00:06:44.374 ], 00:06:44.374 "product_name": "NVMe disk", 00:06:44.374 "block_size": 4096, 00:06:44.374 "num_blocks": 38912, 00:06:44.374 "uuid": "4e2ef493-797b-4325-8ede-5a2942016c69", 00:06:44.374 "numa_id": -1, 00:06:44.374 "assigned_rate_limits": { 00:06:44.374 "rw_ios_per_sec": 0, 00:06:44.374 "rw_mbytes_per_sec": 0, 00:06:44.374 "r_mbytes_per_sec": 0, 00:06:44.374 "w_mbytes_per_sec": 0 00:06:44.374 }, 00:06:44.374 "claimed": false, 00:06:44.374 "zoned": false, 00:06:44.374 "supported_io_types": { 00:06:44.374 "read": true, 00:06:44.374 "write": true, 00:06:44.374 "unmap": true, 00:06:44.374 "flush": true, 00:06:44.374 "reset": true, 00:06:44.374 "nvme_admin": true, 00:06:44.374 "nvme_io": true, 00:06:44.374 "nvme_io_md": false, 00:06:44.374 "write_zeroes": true, 00:06:44.374 "zcopy": false, 00:06:44.374 "get_zone_info": false, 00:06:44.374 "zone_management": false, 00:06:44.374 "zone_append": false, 00:06:44.374 "compare": true, 00:06:44.374 "compare_and_write": true, 00:06:44.374 "abort": true, 00:06:44.374 "seek_hole": false, 00:06:44.374 "seek_data": false, 00:06:44.374 "copy": true, 00:06:44.374 "nvme_iov_md": false 00:06:44.374 }, 00:06:44.374 "memory_domains": [ 00:06:44.374 { 00:06:44.374 "dma_device_id": "system", 00:06:44.374 "dma_device_type": 1 00:06:44.374 } 00:06:44.374 ], 00:06:44.374 "driver_specific": { 00:06:44.374 "nvme": [ 00:06:44.374 { 00:06:44.374 "trid": { 00:06:44.374 "trtype": "TCP", 00:06:44.374 "adrfam": "IPv4", 00:06:44.374 "traddr": "10.0.0.3", 00:06:44.374 "trsvcid": "4420", 00:06:44.374 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:44.374 }, 00:06:44.374 "ctrlr_data": { 00:06:44.374 "cntlid": 1, 00:06:44.374 "vendor_id": "0x8086", 00:06:44.374 "model_number": "SPDK bdev Controller", 00:06:44.374 "serial_number": "SPDK0", 00:06:44.374 "firmware_revision": "25.01", 00:06:44.374 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:44.374 "oacs": { 00:06:44.374 "security": 0, 00:06:44.374 "format": 0, 00:06:44.374 "firmware": 0, 
00:06:44.374 "ns_manage": 0 00:06:44.374 }, 00:06:44.374 "multi_ctrlr": true, 00:06:44.374 "ana_reporting": false 00:06:44.374 }, 00:06:44.374 "vs": { 00:06:44.374 "nvme_version": "1.3" 00:06:44.374 }, 00:06:44.374 "ns_data": { 00:06:44.374 "id": 1, 00:06:44.374 "can_share": true 00:06:44.374 } 00:06:44.374 } 00:06:44.374 ], 00:06:44.374 "mp_policy": "active_passive" 00:06:44.374 } 00:06:44.374 } 00:06:44.374 ] 00:06:44.374 20:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:44.374 20:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=62643 00:06:44.374 20:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:44.374 Running I/O for 10 seconds... 00:06:45.766 Latency(us) 00:06:45.766 [2024-11-26T20:30:00.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:45.766 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:45.766 Nvme0n1 : 1.00 8834.00 34.51 0.00 0.00 0.00 0.00 0.00 00:06:45.766 [2024-11-26T20:30:00.321Z] =================================================================================================================== 00:06:45.766 [2024-11-26T20:30:00.321Z] Total : 8834.00 34.51 0.00 0.00 0.00 0.00 0.00 00:06:45.766 00:06:46.372 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 74bc76d1-1454-4a9c-81b6-d5b1c3d79ef0 00:06:46.372 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:46.372 Nvme0n1 : 2.00 9624.00 37.59 0.00 0.00 0.00 0.00 0.00 00:06:46.372 [2024-11-26T20:30:00.927Z] =================================================================================================================== 00:06:46.372 [2024-11-26T20:30:00.927Z] Total : 9624.00 37.59 0.00 0.00 0.00 0.00 0.00 00:06:46.372 00:06:46.631 true 00:06:46.631 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:46.631 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74bc76d1-1454-4a9c-81b6-d5b1c3d79ef0 00:06:46.889 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:46.889 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:46.889 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 62643 00:06:47.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:47.454 Nvme0n1 : 3.00 9972.00 38.95 0.00 0.00 0.00 0.00 0.00 00:06:47.454 [2024-11-26T20:30:02.009Z] =================================================================================================================== 00:06:47.454 [2024-11-26T20:30:02.009Z] Total : 9972.00 38.95 0.00 0.00 0.00 0.00 0.00 00:06:47.454 00:06:48.387 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:48.387 Nvme0n1 : 4.00 9863.75 38.53 0.00 0.00 0.00 0.00 0.00 00:06:48.387 [2024-11-26T20:30:02.942Z] 
=================================================================================================================== 00:06:48.387 [2024-11-26T20:30:02.942Z] Total : 9863.75 38.53 0.00 0.00 0.00 0.00 0.00 00:06:48.387 00:06:49.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:49.759 Nvme0n1 : 5.00 10071.20 39.34 0.00 0.00 0.00 0.00 0.00 00:06:49.759 [2024-11-26T20:30:04.314Z] =================================================================================================================== 00:06:49.759 [2024-11-26T20:30:04.314Z] Total : 10071.20 39.34 0.00 0.00 0.00 0.00 0.00 00:06:49.759 00:06:50.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:50.691 Nvme0n1 : 6.00 10146.17 39.63 0.00 0.00 0.00 0.00 0.00 00:06:50.691 [2024-11-26T20:30:05.246Z] =================================================================================================================== 00:06:50.691 [2024-11-26T20:30:05.246Z] Total : 10146.17 39.63 0.00 0.00 0.00 0.00 0.00 00:06:50.691 00:06:51.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:51.651 Nvme0n1 : 7.00 10026.57 39.17 0.00 0.00 0.00 0.00 0.00 00:06:51.651 [2024-11-26T20:30:06.206Z] =================================================================================================================== 00:06:51.651 [2024-11-26T20:30:06.206Z] Total : 10026.57 39.17 0.00 0.00 0.00 0.00 0.00 00:06:51.651 00:06:52.583 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:52.583 Nvme0n1 : 8.00 10056.12 39.28 0.00 0.00 0.00 0.00 0.00 00:06:52.583 [2024-11-26T20:30:07.138Z] =================================================================================================================== 00:06:52.583 [2024-11-26T20:30:07.138Z] Total : 10056.12 39.28 0.00 0.00 0.00 0.00 0.00 00:06:52.583 00:06:53.514 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:53.514 Nvme0n1 : 9.00 10105.33 39.47 0.00 0.00 0.00 0.00 0.00 00:06:53.514 [2024-11-26T20:30:08.069Z] =================================================================================================================== 00:06:53.514 [2024-11-26T20:30:08.069Z] Total : 10105.33 39.47 0.00 0.00 0.00 0.00 0.00 00:06:53.514 00:06:54.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:54.446 Nvme0n1 : 10.00 10130.10 39.57 0.00 0.00 0.00 0.00 0.00 00:06:54.446 [2024-11-26T20:30:09.001Z] =================================================================================================================== 00:06:54.446 [2024-11-26T20:30:09.001Z] Total : 10130.10 39.57 0.00 0.00 0.00 0.00 0.00 00:06:54.446 00:06:54.447 00:06:54.447 Latency(us) 00:06:54.447 [2024-11-26T20:30:09.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:54.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:54.447 Nvme0n1 : 10.00 10139.11 39.61 0.00 0.00 12620.15 1235.10 144380.85 00:06:54.447 [2024-11-26T20:30:09.002Z] =================================================================================================================== 00:06:54.447 [2024-11-26T20:30:09.002Z] Total : 10139.11 39.61 0.00 0.00 12620.15 1235.10 144380.85 00:06:54.447 { 00:06:54.447 "results": [ 00:06:54.447 { 00:06:54.447 "job": "Nvme0n1", 00:06:54.447 "core_mask": "0x2", 00:06:54.447 "workload": "randwrite", 00:06:54.447 "status": "finished", 00:06:54.447 "queue_depth": 128, 00:06:54.447 "io_size": 4096, 00:06:54.447 
"runtime": 10.003734, 00:06:54.447 "iops": 10139.114054811933, 00:06:54.447 "mibps": 39.60591427660911, 00:06:54.447 "io_failed": 0, 00:06:54.447 "io_timeout": 0, 00:06:54.447 "avg_latency_us": 12620.15240927151, 00:06:54.447 "min_latency_us": 1235.1015384615384, 00:06:54.447 "max_latency_us": 144380.84923076924 00:06:54.447 } 00:06:54.447 ], 00:06:54.447 "core_count": 1 00:06:54.447 } 00:06:54.447 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 62625 00:06:54.447 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 62625 ']' 00:06:54.447 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 62625 00:06:54.447 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:06:54.447 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.447 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62625 00:06:54.447 killing process with pid 62625 00:06:54.447 Received shutdown signal, test time was about 10.000000 seconds 00:06:54.447 00:06:54.447 Latency(us) 00:06:54.447 [2024-11-26T20:30:09.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:54.447 [2024-11-26T20:30:09.002Z] =================================================================================================================== 00:06:54.447 [2024-11-26T20:30:09.002Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:54.447 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:54.447 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:54.447 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62625' 00:06:54.447 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 62625 00:06:54.447 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 62625 00:06:54.704 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:54.704 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:54.961 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:54.961 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74bc76d1-1454-4a9c-81b6-d5b1c3d79ef0 00:06:55.219 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:06:55.219 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:06:55.219 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:55.476 [2024-11-26 20:30:09.868028] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:06:55.476 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74bc76d1-1454-4a9c-81b6-d5b1c3d79ef0 00:06:55.476 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:06:55.476 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74bc76d1-1454-4a9c-81b6-d5b1c3d79ef0 00:06:55.476 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:55.476 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.476 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:55.476 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.476 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:55.476 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.476 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:55.476 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:55.476 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74bc76d1-1454-4a9c-81b6-d5b1c3d79ef0 00:06:55.734 request: 00:06:55.734 { 00:06:55.734 "uuid": "74bc76d1-1454-4a9c-81b6-d5b1c3d79ef0", 00:06:55.734 "method": "bdev_lvol_get_lvstores", 00:06:55.734 "req_id": 1 00:06:55.734 } 00:06:55.734 Got JSON-RPC error response 00:06:55.734 response: 00:06:55.734 { 00:06:55.734 "code": -19, 00:06:55.734 "message": "No such device" 00:06:55.734 } 00:06:55.734 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:06:55.734 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:55.734 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:55.734 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:55.734 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:55.734 aio_bdev 00:06:55.734 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
4e2ef493-797b-4325-8ede-5a2942016c69 00:06:55.734 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=4e2ef493-797b-4325-8ede-5a2942016c69 00:06:55.734 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:55.734 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:06:55.734 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:55.734 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:55.734 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:55.992 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4e2ef493-797b-4325-8ede-5a2942016c69 -t 2000 00:06:56.249 [ 00:06:56.249 { 00:06:56.249 "name": "4e2ef493-797b-4325-8ede-5a2942016c69", 00:06:56.249 "aliases": [ 00:06:56.249 "lvs/lvol" 00:06:56.249 ], 00:06:56.249 "product_name": "Logical Volume", 00:06:56.249 "block_size": 4096, 00:06:56.249 "num_blocks": 38912, 00:06:56.249 "uuid": "4e2ef493-797b-4325-8ede-5a2942016c69", 00:06:56.249 "assigned_rate_limits": { 00:06:56.249 "rw_ios_per_sec": 0, 00:06:56.249 "rw_mbytes_per_sec": 0, 00:06:56.249 "r_mbytes_per_sec": 0, 00:06:56.249 "w_mbytes_per_sec": 0 00:06:56.249 }, 00:06:56.249 "claimed": false, 00:06:56.249 "zoned": false, 00:06:56.249 "supported_io_types": { 00:06:56.249 "read": true, 00:06:56.249 "write": true, 00:06:56.249 "unmap": true, 00:06:56.249 "flush": false, 00:06:56.249 "reset": true, 00:06:56.249 "nvme_admin": false, 00:06:56.249 "nvme_io": false, 00:06:56.249 "nvme_io_md": false, 00:06:56.249 "write_zeroes": true, 00:06:56.249 "zcopy": false, 00:06:56.249 "get_zone_info": false, 00:06:56.249 "zone_management": false, 00:06:56.249 "zone_append": false, 00:06:56.249 "compare": false, 00:06:56.249 "compare_and_write": false, 00:06:56.249 "abort": false, 00:06:56.249 "seek_hole": true, 00:06:56.249 "seek_data": true, 00:06:56.249 "copy": false, 00:06:56.249 "nvme_iov_md": false 00:06:56.249 }, 00:06:56.249 "driver_specific": { 00:06:56.249 "lvol": { 00:06:56.249 "lvol_store_uuid": "74bc76d1-1454-4a9c-81b6-d5b1c3d79ef0", 00:06:56.249 "base_bdev": "aio_bdev", 00:06:56.249 "thin_provision": false, 00:06:56.249 "num_allocated_clusters": 38, 00:06:56.249 "snapshot": false, 00:06:56.249 "clone": false, 00:06:56.249 "esnap_clone": false 00:06:56.249 } 00:06:56.249 } 00:06:56.249 } 00:06:56.249 ] 00:06:56.249 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:06:56.249 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74bc76d1-1454-4a9c-81b6-d5b1c3d79ef0 00:06:56.249 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:06:56.507 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:06:56.507 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:06:56.507 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74bc76d1-1454-4a9c-81b6-d5b1c3d79ef0 00:06:56.765 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:06:56.765 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4e2ef493-797b-4325-8ede-5a2942016c69 00:06:56.765 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 74bc76d1-1454-4a9c-81b6-d5b1c3d79ef0 00:06:57.061 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:57.061 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:57.626 ************************************ 00:06:57.626 END TEST lvs_grow_clean 00:06:57.626 ************************************ 00:06:57.626 00:06:57.626 real 0m16.622s 00:06:57.626 user 0m15.774s 00:06:57.626 sys 0m1.917s 00:06:57.626 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.626 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:57.626 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:06:57.626 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:57.626 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.626 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:57.626 ************************************ 00:06:57.626 START TEST lvs_grow_dirty 00:06:57.626 ************************************ 00:06:57.626 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:06:57.626 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:57.626 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:57.626 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:57.626 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:57.626 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:57.626 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:57.626 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:57.626 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:57.626 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:57.884 20:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:57.884 20:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:57.884 20:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=88d4fbc0-0eed-45ad-b51a-0f6f941b8f05 00:06:57.884 20:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:57.884 20:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88d4fbc0-0eed-45ad-b51a-0f6f941b8f05 00:06:58.143 20:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:58.143 20:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:58.143 20:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 88d4fbc0-0eed-45ad-b51a-0f6f941b8f05 lvol 150 00:06:58.401 20:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=cb6b4ef3-d1c5-4abb-a524-3fc1c749bc5a 00:06:58.401 20:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:58.401 20:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:58.658 [2024-11-26 20:30:13.021679] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:58.658 [2024-11-26 20:30:13.021740] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:58.658 true 00:06:58.658 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88d4fbc0-0eed-45ad-b51a-0f6f941b8f05 00:06:58.659 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:58.917 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:58.917 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:58.917 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cb6b4ef3-d1c5-4abb-a524-3fc1c749bc5a 00:06:59.175 20:30:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:06:59.432 [2024-11-26 20:30:13.842059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:59.432 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:59.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:59.690 20:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=62881 00:06:59.690 20:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:59.690 20:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:59.690 20:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 62881 /var/tmp/bdevperf.sock 00:06:59.690 20:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 62881 ']' 00:06:59.690 20:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:59.690 20:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.690 20:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:59.690 20:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.690 20:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:59.690 [2024-11-26 20:30:14.118993] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:06:59.690 [2024-11-26 20:30:14.119246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62881 ] 00:06:59.949 [2024-11-26 20:30:14.257276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.949 [2024-11-26 20:30:14.310790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.949 [2024-11-26 20:30:14.346473] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.514 20:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.514 20:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:00.514 20:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:00.772 Nvme0n1 00:07:00.772 20:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:01.029 [ 00:07:01.029 { 00:07:01.029 "name": "Nvme0n1", 00:07:01.029 "aliases": [ 00:07:01.030 "cb6b4ef3-d1c5-4abb-a524-3fc1c749bc5a" 00:07:01.030 ], 00:07:01.030 "product_name": "NVMe disk", 00:07:01.030 "block_size": 4096, 00:07:01.030 "num_blocks": 38912, 00:07:01.030 "uuid": "cb6b4ef3-d1c5-4abb-a524-3fc1c749bc5a", 00:07:01.030 "numa_id": -1, 00:07:01.030 "assigned_rate_limits": { 00:07:01.030 "rw_ios_per_sec": 0, 00:07:01.030 "rw_mbytes_per_sec": 0, 00:07:01.030 "r_mbytes_per_sec": 0, 00:07:01.030 "w_mbytes_per_sec": 0 00:07:01.030 }, 00:07:01.030 "claimed": false, 00:07:01.030 "zoned": false, 00:07:01.030 "supported_io_types": { 00:07:01.030 "read": true, 00:07:01.030 "write": true, 00:07:01.030 "unmap": true, 00:07:01.030 "flush": true, 00:07:01.030 "reset": true, 00:07:01.030 "nvme_admin": true, 00:07:01.030 "nvme_io": true, 00:07:01.030 "nvme_io_md": false, 00:07:01.030 "write_zeroes": true, 00:07:01.030 "zcopy": false, 00:07:01.030 "get_zone_info": false, 00:07:01.030 "zone_management": false, 00:07:01.030 "zone_append": false, 00:07:01.030 "compare": true, 00:07:01.030 "compare_and_write": true, 00:07:01.030 "abort": true, 00:07:01.030 "seek_hole": false, 00:07:01.030 "seek_data": false, 00:07:01.030 "copy": true, 00:07:01.030 "nvme_iov_md": false 00:07:01.030 }, 00:07:01.030 "memory_domains": [ 00:07:01.030 { 00:07:01.030 "dma_device_id": "system", 00:07:01.030 "dma_device_type": 1 00:07:01.030 } 00:07:01.030 ], 00:07:01.030 "driver_specific": { 00:07:01.030 "nvme": [ 00:07:01.030 { 00:07:01.030 "trid": { 00:07:01.030 "trtype": "TCP", 00:07:01.030 "adrfam": "IPv4", 00:07:01.030 "traddr": "10.0.0.3", 00:07:01.030 "trsvcid": "4420", 00:07:01.030 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:01.030 }, 00:07:01.030 "ctrlr_data": { 00:07:01.030 "cntlid": 1, 00:07:01.030 "vendor_id": "0x8086", 00:07:01.030 "model_number": "SPDK bdev Controller", 00:07:01.030 "serial_number": "SPDK0", 00:07:01.030 "firmware_revision": "25.01", 00:07:01.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:01.030 "oacs": { 00:07:01.030 "security": 0, 00:07:01.030 "format": 0, 00:07:01.030 "firmware": 0, 
00:07:01.030 "ns_manage": 0 00:07:01.030 }, 00:07:01.030 "multi_ctrlr": true, 00:07:01.030 "ana_reporting": false 00:07:01.030 }, 00:07:01.030 "vs": { 00:07:01.030 "nvme_version": "1.3" 00:07:01.030 }, 00:07:01.030 "ns_data": { 00:07:01.030 "id": 1, 00:07:01.030 "can_share": true 00:07:01.030 } 00:07:01.030 } 00:07:01.030 ], 00:07:01.030 "mp_policy": "active_passive" 00:07:01.030 } 00:07:01.030 } 00:07:01.030 ] 00:07:01.030 20:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=62905 00:07:01.030 20:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:01.030 20:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:01.287 Running I/O for 10 seconds... 00:07:02.221 Latency(us) 00:07:02.221 [2024-11-26T20:30:16.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:02.221 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:02.221 Nvme0n1 : 1.00 10922.00 42.66 0.00 0.00 0.00 0.00 0.00 00:07:02.221 [2024-11-26T20:30:16.776Z] =================================================================================================================== 00:07:02.221 [2024-11-26T20:30:16.776Z] Total : 10922.00 42.66 0.00 0.00 0.00 0.00 0.00 00:07:02.221 00:07:03.152 20:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 88d4fbc0-0eed-45ad-b51a-0f6f941b8f05 00:07:03.152 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.152 Nvme0n1 : 2.00 10795.00 42.17 0.00 0.00 0.00 0.00 0.00 00:07:03.152 [2024-11-26T20:30:17.707Z] =================================================================================================================== 00:07:03.152 [2024-11-26T20:30:17.707Z] Total : 10795.00 42.17 0.00 0.00 0.00 0.00 0.00 00:07:03.152 00:07:03.421 true 00:07:03.421 20:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88d4fbc0-0eed-45ad-b51a-0f6f941b8f05 00:07:03.421 20:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:03.706 20:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:03.707 20:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:03.707 20:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 62905 00:07:04.273 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.273 Nvme0n1 : 3.00 10710.00 41.84 0.00 0.00 0.00 0.00 0.00 00:07:04.273 [2024-11-26T20:30:18.828Z] =================================================================================================================== 00:07:04.273 [2024-11-26T20:30:18.828Z] Total : 10710.00 41.84 0.00 0.00 0.00 0.00 0.00 00:07:04.273 00:07:05.208 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.208 Nvme0n1 : 4.00 10665.75 41.66 0.00 0.00 0.00 0.00 0.00 00:07:05.208 [2024-11-26T20:30:19.763Z] 
=================================================================================================================== 00:07:05.208 [2024-11-26T20:30:19.763Z] Total : 10665.75 41.66 0.00 0.00 0.00 0.00 0.00 00:07:05.208 00:07:06.141 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.141 Nvme0n1 : 5.00 9508.20 37.14 0.00 0.00 0.00 0.00 0.00 00:07:06.141 [2024-11-26T20:30:20.696Z] =================================================================================================================== 00:07:06.141 [2024-11-26T20:30:20.696Z] Total : 9508.20 37.14 0.00 0.00 0.00 0.00 0.00 00:07:06.141 00:07:07.074 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.074 Nvme0n1 : 6.00 9638.00 37.65 0.00 0.00 0.00 0.00 0.00 00:07:07.074 [2024-11-26T20:30:21.629Z] =================================================================================================================== 00:07:07.074 [2024-11-26T20:30:21.629Z] Total : 9638.00 37.65 0.00 0.00 0.00 0.00 0.00 00:07:07.074 00:07:08.449 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.449 Nvme0n1 : 7.00 9767.00 38.15 0.00 0.00 0.00 0.00 0.00 00:07:08.449 [2024-11-26T20:30:23.004Z] =================================================================================================================== 00:07:08.449 [2024-11-26T20:30:23.004Z] Total : 9767.00 38.15 0.00 0.00 0.00 0.00 0.00 00:07:08.449 00:07:09.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.382 Nvme0n1 : 8.00 9895.50 38.65 0.00 0.00 0.00 0.00 0.00 00:07:09.382 [2024-11-26T20:30:23.937Z] =================================================================================================================== 00:07:09.382 [2024-11-26T20:30:23.937Z] Total : 9895.50 38.65 0.00 0.00 0.00 0.00 0.00 00:07:09.382 00:07:10.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.315 Nvme0n1 : 9.00 9791.33 38.25 0.00 0.00 0.00 0.00 0.00 00:07:10.315 [2024-11-26T20:30:24.870Z] =================================================================================================================== 00:07:10.315 [2024-11-26T20:30:24.870Z] Total : 9791.33 38.25 0.00 0.00 0.00 0.00 0.00 00:07:10.315 00:07:11.248 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.248 Nvme0n1 : 10.00 9866.30 38.54 0.00 0.00 0.00 0.00 0.00 00:07:11.248 [2024-11-26T20:30:25.803Z] =================================================================================================================== 00:07:11.248 [2024-11-26T20:30:25.803Z] Total : 9866.30 38.54 0.00 0.00 0.00 0.00 0.00 00:07:11.248 00:07:11.248 00:07:11.248 Latency(us) 00:07:11.248 [2024-11-26T20:30:25.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:11.248 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.248 Nvme0n1 : 10.01 9868.16 38.55 0.00 0.00 12966.95 6024.27 551712.30 00:07:11.248 [2024-11-26T20:30:25.803Z] =================================================================================================================== 00:07:11.248 [2024-11-26T20:30:25.803Z] Total : 9868.16 38.55 0.00 0.00 12966.95 6024.27 551712.30 00:07:11.248 { 00:07:11.248 "results": [ 00:07:11.248 { 00:07:11.248 "job": "Nvme0n1", 00:07:11.248 "core_mask": "0x2", 00:07:11.248 "workload": "randwrite", 00:07:11.248 "status": "finished", 00:07:11.248 "queue_depth": 128, 00:07:11.248 "io_size": 4096, 00:07:11.248 "runtime": 
10.011089, 00:07:11.248 "iops": 9868.157200480387, 00:07:11.248 "mibps": 38.547489064376514, 00:07:11.248 "io_failed": 0, 00:07:11.248 "io_timeout": 0, 00:07:11.248 "avg_latency_us": 12966.953884650033, 00:07:11.248 "min_latency_us": 6024.2707692307695, 00:07:11.248 "max_latency_us": 551712.2953846154 00:07:11.248 } 00:07:11.248 ], 00:07:11.248 "core_count": 1 00:07:11.248 } 00:07:11.248 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 62881 00:07:11.248 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 62881 ']' 00:07:11.248 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 62881 00:07:11.248 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:11.248 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.248 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62881 00:07:11.248 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:11.248 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:11.248 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62881' 00:07:11.248 killing process with pid 62881 00:07:11.248 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 62881 00:07:11.248 Received shutdown signal, test time was about 10.000000 seconds 00:07:11.248 00:07:11.248 Latency(us) 00:07:11.248 [2024-11-26T20:30:25.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:11.248 [2024-11-26T20:30:25.803Z] =================================================================================================================== 00:07:11.248 [2024-11-26T20:30:25.803Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:11.248 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 62881 00:07:11.248 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:11.506 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:11.764 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:11.764 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88d4fbc0-0eed-45ad-b51a-0f6f941b8f05 00:07:12.022 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:12.022 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:12.022 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 62548 
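
Editor's note: this is where the dirty variant diverges from lvs_grow_clean. Instead of deleting the lvol and lvstore after the bdevperf run, the harness kills the target outright so the logical volume store is never unloaded cleanly, then starts a fresh nvmf_tgt in the same namespace. A rough sketch of that sequence, using only commands visible in the trace (the nvmf_tgt path and flags are the logged ones; the pid variable stands in for the value captured by waitforlisten, 62548 in this run):

    # leave the lvstore dirty, then restart the target (simplified)
    kill -9 "$nvmfpid"        # nvmf_tgt dies without unloading the lvstore
    wait "$nvmfpid" || true
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # recovery of the dirty lvstore happens later, when the aio bdev is re-registered

The point of the kill -9 is that the grow operation must still be safe when the store is reloaded from an unclean shutdown, which the blobstore recovery messages further down confirm.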
00:07:12.022 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 62548 00:07:12.022 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 62548 Killed "${NVMF_APP[@]}" "$@" 00:07:12.022 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:12.022 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:12.022 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:12.022 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:12.022 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:12.022 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:12.023 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63033 00:07:12.023 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63033 00:07:12.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.023 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63033 ']' 00:07:12.023 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.023 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.023 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.023 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.023 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:12.023 [2024-11-26 20:30:26.475706] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:07:12.023 [2024-11-26 20:30:26.475862] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.280 [2024-11-26 20:30:26.611496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.280 [2024-11-26 20:30:26.643426] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:12.280 [2024-11-26 20:30:26.643550] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:12.280 [2024-11-26 20:30:26.643608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:12.280 [2024-11-26 20:30:26.643648] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:12.280 [2024-11-26 20:30:26.643661] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:12.280 [2024-11-26 20:30:26.643914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.280 [2024-11-26 20:30:26.673337] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.280 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.280 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:12.280 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:12.280 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:12.280 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:12.280 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:12.280 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:12.537 [2024-11-26 20:30:26.980034] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:12.537 [2024-11-26 20:30:26.980331] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:12.537 [2024-11-26 20:30:26.980419] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:12.537 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:12.537 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev cb6b4ef3-d1c5-4abb-a524-3fc1c749bc5a 00:07:12.537 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=cb6b4ef3-d1c5-4abb-a524-3fc1c749bc5a 00:07:12.538 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:12.538 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:12.538 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:12.538 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:12.538 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:12.795 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cb6b4ef3-d1c5-4abb-a524-3fc1c749bc5a -t 2000 00:07:13.053 [ 00:07:13.053 { 00:07:13.053 "name": "cb6b4ef3-d1c5-4abb-a524-3fc1c749bc5a", 00:07:13.053 "aliases": [ 00:07:13.053 "lvs/lvol" 00:07:13.053 ], 00:07:13.053 "product_name": "Logical Volume", 00:07:13.053 "block_size": 4096, 00:07:13.053 "num_blocks": 38912, 00:07:13.053 "uuid": "cb6b4ef3-d1c5-4abb-a524-3fc1c749bc5a", 00:07:13.053 "assigned_rate_limits": { 00:07:13.053 "rw_ios_per_sec": 0, 00:07:13.053 "rw_mbytes_per_sec": 0, 00:07:13.053 "r_mbytes_per_sec": 0, 00:07:13.053 "w_mbytes_per_sec": 0 00:07:13.053 }, 00:07:13.053 
"claimed": false, 00:07:13.053 "zoned": false, 00:07:13.053 "supported_io_types": { 00:07:13.053 "read": true, 00:07:13.053 "write": true, 00:07:13.053 "unmap": true, 00:07:13.053 "flush": false, 00:07:13.053 "reset": true, 00:07:13.053 "nvme_admin": false, 00:07:13.053 "nvme_io": false, 00:07:13.053 "nvme_io_md": false, 00:07:13.053 "write_zeroes": true, 00:07:13.053 "zcopy": false, 00:07:13.053 "get_zone_info": false, 00:07:13.053 "zone_management": false, 00:07:13.053 "zone_append": false, 00:07:13.053 "compare": false, 00:07:13.053 "compare_and_write": false, 00:07:13.053 "abort": false, 00:07:13.053 "seek_hole": true, 00:07:13.053 "seek_data": true, 00:07:13.053 "copy": false, 00:07:13.053 "nvme_iov_md": false 00:07:13.053 }, 00:07:13.053 "driver_specific": { 00:07:13.053 "lvol": { 00:07:13.053 "lvol_store_uuid": "88d4fbc0-0eed-45ad-b51a-0f6f941b8f05", 00:07:13.053 "base_bdev": "aio_bdev", 00:07:13.053 "thin_provision": false, 00:07:13.053 "num_allocated_clusters": 38, 00:07:13.053 "snapshot": false, 00:07:13.053 "clone": false, 00:07:13.053 "esnap_clone": false 00:07:13.053 } 00:07:13.053 } 00:07:13.053 } 00:07:13.053 ] 00:07:13.053 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:13.053 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88d4fbc0-0eed-45ad-b51a-0f6f941b8f05 00:07:13.053 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:13.053 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:13.053 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88d4fbc0-0eed-45ad-b51a-0f6f941b8f05 00:07:13.053 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:13.311 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:13.311 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:13.569 [2024-11-26 20:30:27.942091] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:13.569 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88d4fbc0-0eed-45ad-b51a-0f6f941b8f05 00:07:13.569 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:13.569 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88d4fbc0-0eed-45ad-b51a-0f6f941b8f05 00:07:13.569 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:13.569 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.569 20:30:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:13.569 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.569 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:13.569 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.569 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:13.569 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:13.569 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88d4fbc0-0eed-45ad-b51a-0f6f941b8f05 00:07:13.836 request: 00:07:13.836 { 00:07:13.836 "uuid": "88d4fbc0-0eed-45ad-b51a-0f6f941b8f05", 00:07:13.836 "method": "bdev_lvol_get_lvstores", 00:07:13.836 "req_id": 1 00:07:13.836 } 00:07:13.836 Got JSON-RPC error response 00:07:13.836 response: 00:07:13.836 { 00:07:13.836 "code": -19, 00:07:13.836 "message": "No such device" 00:07:13.836 } 00:07:13.836 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:13.836 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:13.836 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:13.836 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:13.836 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:14.093 aio_bdev 00:07:14.093 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev cb6b4ef3-d1c5-4abb-a524-3fc1c749bc5a 00:07:14.093 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=cb6b4ef3-d1c5-4abb-a524-3fc1c749bc5a 00:07:14.093 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:14.093 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:14.093 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:14.093 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:14.093 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:14.093 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cb6b4ef3-d1c5-4abb-a524-3fc1c749bc5a -t 2000 00:07:14.349 [ 00:07:14.349 { 
00:07:14.349 "name": "cb6b4ef3-d1c5-4abb-a524-3fc1c749bc5a", 00:07:14.349 "aliases": [ 00:07:14.349 "lvs/lvol" 00:07:14.349 ], 00:07:14.349 "product_name": "Logical Volume", 00:07:14.349 "block_size": 4096, 00:07:14.349 "num_blocks": 38912, 00:07:14.349 "uuid": "cb6b4ef3-d1c5-4abb-a524-3fc1c749bc5a", 00:07:14.349 "assigned_rate_limits": { 00:07:14.349 "rw_ios_per_sec": 0, 00:07:14.349 "rw_mbytes_per_sec": 0, 00:07:14.349 "r_mbytes_per_sec": 0, 00:07:14.366 "w_mbytes_per_sec": 0 00:07:14.366 }, 00:07:14.366 "claimed": false, 00:07:14.366 "zoned": false, 00:07:14.366 "supported_io_types": { 00:07:14.366 "read": true, 00:07:14.366 "write": true, 00:07:14.366 "unmap": true, 00:07:14.366 "flush": false, 00:07:14.366 "reset": true, 00:07:14.366 "nvme_admin": false, 00:07:14.366 "nvme_io": false, 00:07:14.366 "nvme_io_md": false, 00:07:14.367 "write_zeroes": true, 00:07:14.367 "zcopy": false, 00:07:14.367 "get_zone_info": false, 00:07:14.367 "zone_management": false, 00:07:14.367 "zone_append": false, 00:07:14.367 "compare": false, 00:07:14.367 "compare_and_write": false, 00:07:14.367 "abort": false, 00:07:14.367 "seek_hole": true, 00:07:14.367 "seek_data": true, 00:07:14.367 "copy": false, 00:07:14.367 "nvme_iov_md": false 00:07:14.367 }, 00:07:14.367 "driver_specific": { 00:07:14.367 "lvol": { 00:07:14.367 "lvol_store_uuid": "88d4fbc0-0eed-45ad-b51a-0f6f941b8f05", 00:07:14.367 "base_bdev": "aio_bdev", 00:07:14.367 "thin_provision": false, 00:07:14.367 "num_allocated_clusters": 38, 00:07:14.367 "snapshot": false, 00:07:14.367 "clone": false, 00:07:14.367 "esnap_clone": false 00:07:14.367 } 00:07:14.367 } 00:07:14.367 } 00:07:14.367 ] 00:07:14.367 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:14.367 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88d4fbc0-0eed-45ad-b51a-0f6f941b8f05 00:07:14.367 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:14.625 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:14.625 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88d4fbc0-0eed-45ad-b51a-0f6f941b8f05 00:07:14.625 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:14.889 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:14.889 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete cb6b4ef3-d1c5-4abb-a524-3fc1c749bc5a 00:07:15.145 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 88d4fbc0-0eed-45ad-b51a-0f6f941b8f05 00:07:15.402 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:15.402 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:15.979 ************************************ 00:07:15.979 END TEST lvs_grow_dirty 00:07:15.979 ************************************ 00:07:15.979 00:07:15.979 real 0m18.274s 00:07:15.979 user 0m40.264s 00:07:15.979 sys 0m5.636s 00:07:15.979 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.979 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:15.979 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:15.979 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:15.979 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:15.979 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:15.979 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:15.979 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:15.979 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:15.979 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:15.979 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:15.979 nvmf_trace.0 00:07:15.979 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:15.979 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:15.979 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:15.979 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:16.543 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:16.543 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:16.543 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:16.543 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:16.543 rmmod nvme_tcp 00:07:16.543 rmmod nvme_fabrics 00:07:16.543 rmmod nvme_keyring 00:07:16.543 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:16.543 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:16.543 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:16.543 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63033 ']' 00:07:16.543 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63033 00:07:16.543 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63033 ']' 00:07:16.543 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63033 00:07:16.543 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:16.543 20:30:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.543 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63033 00:07:16.543 killing process with pid 63033 00:07:16.543 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.543 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.543 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63033' 00:07:16.543 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63033 00:07:16.543 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63033 00:07:16.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:16.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:16.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:16.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:16.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:16.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:16.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:16.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:16.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:16.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:16.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:16.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:16.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:16.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:16.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:16.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:16.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:16.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:16.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:16.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:16.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:16.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:07:17.058 00:07:17.058 real 0m37.776s 00:07:17.058 user 1m1.333s 00:07:17.058 sys 0m8.654s 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:17.058 ************************************ 00:07:17.058 END TEST nvmf_lvs_grow 00:07:17.058 ************************************ 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:17.058 ************************************ 00:07:17.058 START TEST nvmf_bdev_io_wait 00:07:17.058 ************************************ 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:17.058 * Looking for test storage... 
00:07:17.058 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:17.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.058 --rc genhtml_branch_coverage=1 00:07:17.058 --rc genhtml_function_coverage=1 00:07:17.058 --rc genhtml_legend=1 00:07:17.058 --rc geninfo_all_blocks=1 00:07:17.058 --rc geninfo_unexecuted_blocks=1 00:07:17.058 00:07:17.058 ' 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:17.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.058 --rc genhtml_branch_coverage=1 00:07:17.058 --rc genhtml_function_coverage=1 00:07:17.058 --rc genhtml_legend=1 00:07:17.058 --rc geninfo_all_blocks=1 00:07:17.058 --rc geninfo_unexecuted_blocks=1 00:07:17.058 00:07:17.058 ' 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:17.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.058 --rc genhtml_branch_coverage=1 00:07:17.058 --rc genhtml_function_coverage=1 00:07:17.058 --rc genhtml_legend=1 00:07:17.058 --rc geninfo_all_blocks=1 00:07:17.058 --rc geninfo_unexecuted_blocks=1 00:07:17.058 00:07:17.058 ' 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:17.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.058 --rc genhtml_branch_coverage=1 00:07:17.058 --rc genhtml_function_coverage=1 00:07:17.058 --rc genhtml_legend=1 00:07:17.058 --rc geninfo_all_blocks=1 00:07:17.058 --rc geninfo_unexecuted_blocks=1 00:07:17.058 00:07:17.058 ' 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.058 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:17.059 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:17.059 
20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:17.059 Cannot find device "nvmf_init_br" 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:07:17.059 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:17.316 Cannot find device "nvmf_init_br2" 00:07:17.316 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:17.317 Cannot find device "nvmf_tgt_br" 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:17.317 Cannot find device "nvmf_tgt_br2" 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:17.317 Cannot find device "nvmf_init_br" 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:17.317 Cannot find device "nvmf_init_br2" 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:17.317 Cannot find device "nvmf_tgt_br" 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:17.317 Cannot find device "nvmf_tgt_br2" 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:17.317 Cannot find device "nvmf_br" 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:17.317 Cannot find device "nvmf_init_if" 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:17.317 Cannot find device "nvmf_init_if2" 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:17.317 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:07:17.317 
20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:17.317 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:17.317 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:17.317 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:07:17.317 00:07:17.317 --- 10.0.0.3 ping statistics --- 00:07:17.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.317 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:17.317 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:17.317 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:07:17.317 00:07:17.317 --- 10.0.0.4 ping statistics --- 00:07:17.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.317 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:17.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:17.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:07:17.317 00:07:17.317 --- 10.0.0.1 ping statistics --- 00:07:17.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.317 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:17.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:17.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:07:17.317 00:07:17.317 --- 10.0.0.2 ping statistics --- 00:07:17.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.317 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:17.317 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:17.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=63396 00:07:17.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:17.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 63396 00:07:17.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 63396 ']' 00:07:17.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:17.576 [2024-11-26 20:30:31.905752] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:07:17.576 [2024-11-26 20:30:31.905818] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.576 [2024-11-26 20:30:32.045331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:17.576 [2024-11-26 20:30:32.083281] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:17.576 [2024-11-26 20:30:32.083324] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:17.576 [2024-11-26 20:30:32.083331] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:17.576 [2024-11-26 20:30:32.083336] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:17.576 [2024-11-26 20:30:32.083341] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:17.577 [2024-11-26 20:30:32.084042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.577 [2024-11-26 20:30:32.084100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.577 [2024-11-26 20:30:32.084295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.577 [2024-11-26 20:30:32.084541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.528 [2024-11-26 20:30:32.862661] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.528 [2024-11-26 20:30:32.877677] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.528 Malloc0 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.528 [2024-11-26 20:30:32.922630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=63431 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=63433 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=63435 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:18.528 20:30:32 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=63437 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:18.528 { 00:07:18.528 "params": { 00:07:18.528 "name": "Nvme$subsystem", 00:07:18.528 "trtype": "$TEST_TRANSPORT", 00:07:18.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:18.528 "adrfam": "ipv4", 00:07:18.528 "trsvcid": "$NVMF_PORT", 00:07:18.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:18.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:18.528 "hdgst": ${hdgst:-false}, 00:07:18.528 "ddgst": ${ddgst:-false} 00:07:18.528 }, 00:07:18.528 "method": "bdev_nvme_attach_controller" 00:07:18.528 } 00:07:18.528 EOF 00:07:18.528 )") 00:07:18.528 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:18.529 { 00:07:18.529 "params": { 00:07:18.529 "name": "Nvme$subsystem", 00:07:18.529 "trtype": "$TEST_TRANSPORT", 00:07:18.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:18.529 "adrfam": "ipv4", 00:07:18.529 "trsvcid": "$NVMF_PORT", 00:07:18.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:18.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:18.529 "hdgst": ${hdgst:-false}, 00:07:18.529 "ddgst": ${ddgst:-false} 00:07:18.529 }, 00:07:18.529 "method": "bdev_nvme_attach_controller" 00:07:18.529 } 00:07:18.529 EOF 00:07:18.529 )") 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:07:18.529 { 00:07:18.529 "params": { 00:07:18.529 "name": "Nvme$subsystem", 00:07:18.529 "trtype": "$TEST_TRANSPORT", 00:07:18.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:18.529 "adrfam": "ipv4", 00:07:18.529 "trsvcid": "$NVMF_PORT", 00:07:18.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:18.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:18.529 "hdgst": ${hdgst:-false}, 00:07:18.529 "ddgst": ${ddgst:-false} 00:07:18.529 }, 00:07:18.529 "method": "bdev_nvme_attach_controller" 00:07:18.529 } 00:07:18.529 EOF 00:07:18.529 )") 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:18.529 { 00:07:18.529 "params": { 00:07:18.529 "name": "Nvme$subsystem", 00:07:18.529 "trtype": "$TEST_TRANSPORT", 00:07:18.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:18.529 "adrfam": "ipv4", 00:07:18.529 "trsvcid": "$NVMF_PORT", 00:07:18.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:18.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:18.529 "hdgst": ${hdgst:-false}, 00:07:18.529 "ddgst": ${ddgst:-false} 00:07:18.529 }, 00:07:18.529 "method": "bdev_nvme_attach_controller" 00:07:18.529 } 00:07:18.529 EOF 00:07:18.529 )") 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:18.529 "params": { 00:07:18.529 "name": "Nvme1", 00:07:18.529 "trtype": "tcp", 00:07:18.529 "traddr": "10.0.0.3", 00:07:18.529 "adrfam": "ipv4", 00:07:18.529 "trsvcid": "4420", 00:07:18.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:18.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:18.529 "hdgst": false, 00:07:18.529 "ddgst": false 00:07:18.529 }, 00:07:18.529 "method": "bdev_nvme_attach_controller" 00:07:18.529 }' 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:18.529 "params": { 00:07:18.529 "name": "Nvme1", 00:07:18.529 "trtype": "tcp", 00:07:18.529 "traddr": "10.0.0.3", 00:07:18.529 "adrfam": "ipv4", 00:07:18.529 "trsvcid": "4420", 00:07:18.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:18.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:18.529 "hdgst": false, 00:07:18.529 "ddgst": false 00:07:18.529 }, 00:07:18.529 "method": "bdev_nvme_attach_controller" 00:07:18.529 }' 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
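The heredoc/jq pattern traced above comes from the harness's gen_nvmf_target_json helper; below is a minimal standalone sketch of the same idea (not the real helper: the fragments are merely wrapped in a JSON array so jq can validate them), assuming TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT are exported as in this run.
gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # one bdev_nvme_attach_controller fragment per requested subsystem id
        config+=("$(cat <<EOF
{
  "method": "bdev_nvme_attach_controller",
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.3}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  }
}
EOF
        )")
    done
    local IFS=,
    # the real helper embeds these fragments in the config bdevperf reads via --json /dev/fd/63
    printf '[%s]\n' "${config[*]}" | jq .
}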
00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:18.529 "params": { 00:07:18.529 "name": "Nvme1", 00:07:18.529 "trtype": "tcp", 00:07:18.529 "traddr": "10.0.0.3", 00:07:18.529 "adrfam": "ipv4", 00:07:18.529 "trsvcid": "4420", 00:07:18.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:18.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:18.529 "hdgst": false, 00:07:18.529 "ddgst": false 00:07:18.529 }, 00:07:18.529 "method": "bdev_nvme_attach_controller" 00:07:18.529 }' 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:18.529 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:18.529 "params": { 00:07:18.529 "name": "Nvme1", 00:07:18.529 "trtype": "tcp", 00:07:18.529 "traddr": "10.0.0.3", 00:07:18.529 "adrfam": "ipv4", 00:07:18.529 "trsvcid": "4420", 00:07:18.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:18.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:18.529 "hdgst": false, 00:07:18.529 "ddgst": false 00:07:18.529 }, 00:07:18.529 "method": "bdev_nvme_attach_controller" 00:07:18.529 }' 00:07:18.529 [2024-11-26 20:30:32.972517] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:07:18.529 [2024-11-26 20:30:32.972742] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:18.529 [2024-11-26 20:30:32.975743] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:07:18.529 [2024-11-26 20:30:32.975946] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:18.529 [2024-11-26 20:30:32.984180] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:07:18.529 [2024-11-26 20:30:32.985666] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:18.529 [2024-11-26 20:30:33.006624] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:07:18.529 [2024-11-26 20:30:33.006710] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:18.529 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 63431 00:07:18.788 [2024-11-26 20:30:33.152035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.788 [2024-11-26 20:30:33.189320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:18.788 [2024-11-26 20:30:33.198865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.788 [2024-11-26 20:30:33.203083] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.788 [2024-11-26 20:30:33.229005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:18.788 [2024-11-26 20:30:33.233908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.788 [2024-11-26 20:30:33.241790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.788 [2024-11-26 20:30:33.264210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:18.788 [2024-11-26 20:30:33.275898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.788 [2024-11-26 20:30:33.276696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.788 [2024-11-26 20:30:33.305499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:18.788 Running I/O for 1 seconds... 00:07:18.788 [2024-11-26 20:30:33.318036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.046 Running I/O for 1 seconds... 00:07:19.046 Running I/O for 1 seconds... 00:07:19.046 Running I/O for 1 seconds... 
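The four "Running I/O for 1 seconds..." lines above come from four bdevperf instances that target/bdev_io_wait.sh launches in parallel, one per workload; a condensed sketch follows (paths, core masks and flags taken from the trace, gen_nvmf_target_json assumed to be sourced from nvmf/common.sh).
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
for spec in "0x10 1 write" "0x20 2 read" "0x40 3 flush" "0x80 4 unmap"; do
    read -r mask id workload <<< "$spec"
    # each instance gets its own core mask and shared-memory id (-i) plus the
    # JSON config that attaches Nvme1n1 over NVMe/TCP (10.0.0.3:4420)
    "$bdevperf" -m "$mask" -i "$id" --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w "$workload" -t 1 -s 256 &
done
wait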
00:07:19.979 12257.00 IOPS, 47.88 MiB/s 00:07:19.979 Latency(us) 00:07:19.979 [2024-11-26T20:30:34.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.979 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:19.979 Nvme1n1 : 1.01 12311.79 48.09 0.00 0.00 10360.22 5973.86 19257.50 00:07:19.979 [2024-11-26T20:30:34.534Z] =================================================================================================================== 00:07:19.979 [2024-11-26T20:30:34.534Z] Total : 12311.79 48.09 0.00 0.00 10360.22 5973.86 19257.50 00:07:19.979 5475.00 IOPS, 21.39 MiB/s 00:07:19.979 Latency(us) 00:07:19.979 [2024-11-26T20:30:34.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.979 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:19.979 Nvme1n1 : 1.02 5534.46 21.62 0.00 0.00 22946.50 10687.41 38515.00 00:07:19.979 [2024-11-26T20:30:34.534Z] =================================================================================================================== 00:07:19.979 [2024-11-26T20:30:34.534Z] Total : 5534.46 21.62 0.00 0.00 22946.50 10687.41 38515.00 00:07:19.979 178224.00 IOPS, 696.19 MiB/s 00:07:19.979 Latency(us) 00:07:19.979 [2024-11-26T20:30:34.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.979 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:19.979 Nvme1n1 : 1.00 177887.61 694.87 0.00 0.00 715.82 337.13 1890.46 00:07:19.979 [2024-11-26T20:30:34.534Z] =================================================================================================================== 00:07:19.979 [2024-11-26T20:30:34.534Z] Total : 177887.61 694.87 0.00 0.00 715.82 337.13 1890.46 00:07:19.979 5417.00 IOPS, 21.16 MiB/s [2024-11-26T20:30:34.534Z] 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 63433 00:07:19.979 00:07:19.979 Latency(us) 00:07:19.979 [2024-11-26T20:30:34.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.980 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:19.980 Nvme1n1 : 1.01 5509.81 21.52 0.00 0.00 23143.75 5847.83 48194.17 00:07:19.980 [2024-11-26T20:30:34.535Z] =================================================================================================================== 00:07:19.980 [2024-11-26T20:30:34.535Z] Total : 5509.81 21.52 0.00 0.00 23143.75 5847.83 48194.17 00:07:19.980 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 63435 00:07:19.980 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 63437 00:07:19.980 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:19.980 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.980 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:20.238 rmmod nvme_tcp 00:07:20.238 rmmod nvme_fabrics 00:07:20.238 rmmod nvme_keyring 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 63396 ']' 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 63396 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 63396 ']' 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 63396 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63396 00:07:20.238 killing process with pid 63396 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63396' 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 63396 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 63396 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:20.238 20:30:34 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:20.238 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:20.497 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:20.497 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:20.497 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:20.497 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:20.497 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:20.497 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:20.497 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:20.497 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:20.497 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:20.497 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:20.498 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:20.498 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:20.498 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.498 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:20.498 20:30:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.498 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:07:20.498 00:07:20.498 real 0m3.574s 00:07:20.498 user 0m15.357s 00:07:20.498 sys 0m1.567s 00:07:20.498 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.498 ************************************ 00:07:20.498 END TEST nvmf_bdev_io_wait 00:07:20.498 ************************************ 00:07:20.498 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:20.498 20:30:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:20.498 20:30:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:20.498 20:30:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.498 20:30:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:20.498 ************************************ 00:07:20.498 START TEST nvmf_queue_depth 00:07:20.498 ************************************ 00:07:20.498 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:20.756 * Looking for test 
storage... 00:07:20.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:20.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.756 --rc genhtml_branch_coverage=1 00:07:20.756 --rc genhtml_function_coverage=1 00:07:20.756 --rc genhtml_legend=1 00:07:20.756 --rc geninfo_all_blocks=1 00:07:20.756 --rc geninfo_unexecuted_blocks=1 00:07:20.756 00:07:20.756 ' 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:20.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.756 --rc genhtml_branch_coverage=1 00:07:20.756 --rc genhtml_function_coverage=1 00:07:20.756 --rc genhtml_legend=1 00:07:20.756 --rc geninfo_all_blocks=1 00:07:20.756 --rc geninfo_unexecuted_blocks=1 00:07:20.756 00:07:20.756 ' 00:07:20.756 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:20.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.756 --rc genhtml_branch_coverage=1 00:07:20.756 --rc genhtml_function_coverage=1 00:07:20.756 --rc genhtml_legend=1 00:07:20.756 --rc geninfo_all_blocks=1 00:07:20.756 --rc geninfo_unexecuted_blocks=1 00:07:20.756 00:07:20.757 ' 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:20.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.757 --rc genhtml_branch_coverage=1 00:07:20.757 --rc genhtml_function_coverage=1 00:07:20.757 --rc genhtml_legend=1 00:07:20.757 --rc geninfo_all_blocks=1 00:07:20.757 --rc geninfo_unexecuted_blocks=1 00:07:20.757 00:07:20.757 ' 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:20.757 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:20.757 
20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:20.757 20:30:35 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:20.757 Cannot find device "nvmf_init_br" 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:20.757 Cannot find device "nvmf_init_br2" 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:20.757 Cannot find device "nvmf_tgt_br" 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:20.757 Cannot find device "nvmf_tgt_br2" 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:20.757 Cannot find device "nvmf_init_br" 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:07:20.757 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:20.757 Cannot find device "nvmf_init_br2" 00:07:20.758 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:07:20.758 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:20.758 Cannot find device "nvmf_tgt_br" 00:07:20.758 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:07:20.758 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:20.758 Cannot find device "nvmf_tgt_br2" 00:07:20.758 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:07:20.758 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:20.758 Cannot find device "nvmf_br" 00:07:20.758 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:07:20.758 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:20.758 Cannot find device "nvmf_init_if" 00:07:20.758 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:07:20.758 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:20.758 Cannot find device "nvmf_init_if2" 00:07:20.758 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:07:20.758 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:21.016 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:21.016 20:30:35 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:07:21.016 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:21.016 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:21.016 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:07:21.016 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:21.016 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:21.016 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:21.016 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:21.016 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:21.016 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:21.016 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:21.016 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:21.016 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:21.016 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:21.016 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:21.016 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:21.016 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:21.016 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:21.016 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:21.016 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:21.016 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:21.016 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:21.016 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:21.016 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:21.017 
20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:21.017 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:21.017 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:07:21.017 00:07:21.017 --- 10.0.0.3 ping statistics --- 00:07:21.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.017 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:21.017 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:21.017 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:07:21.017 00:07:21.017 --- 10.0.0.4 ping statistics --- 00:07:21.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.017 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:21.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:21.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:07:21.017 00:07:21.017 --- 10.0.0.1 ping statistics --- 00:07:21.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.017 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:21.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:21.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:07:21.017 00:07:21.017 --- 10.0.0.2 ping statistics --- 00:07:21.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.017 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=63685 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 63685 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 63685 ']' 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.017 20:30:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:21.017 [2024-11-26 20:30:35.555324] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
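With the queue_depth target now starting on core mask 0x2, the rpc_cmd calls that follow are equivalent, as a hedged sketch, to driving it directly with the rpc.py client on the default /var/tmp/spdk.sock.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8 KiB in-capsule data
$rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420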
00:07:21.017 [2024-11-26 20:30:35.555380] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.275 [2024-11-26 20:30:35.694895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.275 [2024-11-26 20:30:35.735086] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.275 [2024-11-26 20:30:35.735141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:21.275 [2024-11-26 20:30:35.735146] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.275 [2024-11-26 20:30:35.735150] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.275 [2024-11-26 20:30:35.735154] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:21.275 [2024-11-26 20:30:35.735425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.275 [2024-11-26 20:30:35.775956] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:22.210 [2024-11-26 20:30:36.471139] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:22.210 Malloc0 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:22.210 [2024-11-26 20:30:36.514075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=63717 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 63717 /var/tmp/bdevperf.sock 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 63717 ']' 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.210 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:22.210 [2024-11-26 20:30:36.558221] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
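For reference, the target-side bring-up that queue_depth.sh drives through rpc_cmd in the trace above amounts to the following standalone RPC sequence (a minimal sketch, assuming the nvmf_tgt started earlier in this test is listening on the default /var/tmp/spdk.sock and that paths are relative to the SPDK repo):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # the initiator side is then exercised by bdevperf over its own RPC socket:
    # queue depth 1024, 4 KiB verify I/O, 10 second run
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10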
00:07:22.210 [2024-11-26 20:30:36.558293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63717 ] 00:07:22.210 [2024-11-26 20:30:36.698990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.210 [2024-11-26 20:30:36.736677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.469 [2024-11-26 20:30:36.769972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.033 20:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.033 20:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:23.033 20:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:23.033 20:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.033 20:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:23.033 NVMe0n1 00:07:23.033 20:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.033 20:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:23.290 Running I/O for 10 seconds... 00:07:25.154 7841.00 IOPS, 30.63 MiB/s [2024-11-26T20:30:40.641Z] 8025.50 IOPS, 31.35 MiB/s [2024-11-26T20:30:42.008Z] 8132.67 IOPS, 31.77 MiB/s [2024-11-26T20:30:42.940Z] 8240.50 IOPS, 32.19 MiB/s [2024-11-26T20:30:43.875Z] 8406.20 IOPS, 32.84 MiB/s [2024-11-26T20:30:44.808Z] 8473.83 IOPS, 33.10 MiB/s [2024-11-26T20:30:45.741Z] 8524.43 IOPS, 33.30 MiB/s [2024-11-26T20:30:46.675Z] 8730.38 IOPS, 34.10 MiB/s [2024-11-26T20:30:48.048Z] 8961.44 IOPS, 35.01 MiB/s [2024-11-26T20:30:48.048Z] 8882.20 IOPS, 34.70 MiB/s 00:07:33.493 Latency(us) 00:07:33.493 [2024-11-26T20:30:48.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.493 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:33.493 Verification LBA range: start 0x0 length 0x4000 00:07:33.493 NVMe0n1 : 10.08 8900.12 34.77 0.00 0.00 114490.58 21072.34 137928.07 00:07:33.493 [2024-11-26T20:30:48.048Z] =================================================================================================================== 00:07:33.493 [2024-11-26T20:30:48.048Z] Total : 8900.12 34.77 0.00 0.00 114490.58 21072.34 137928.07 00:07:33.493 { 00:07:33.493 "results": [ 00:07:33.493 { 00:07:33.493 "job": "NVMe0n1", 00:07:33.493 "core_mask": "0x1", 00:07:33.493 "workload": "verify", 00:07:33.493 "status": "finished", 00:07:33.493 "verify_range": { 00:07:33.493 "start": 0, 00:07:33.493 "length": 16384 00:07:33.493 }, 00:07:33.493 "queue_depth": 1024, 00:07:33.493 "io_size": 4096, 00:07:33.493 "runtime": 10.080655, 00:07:33.493 "iops": 8900.116113486672, 00:07:33.493 "mibps": 34.766078568307314, 00:07:33.493 "io_failed": 0, 00:07:33.493 "io_timeout": 0, 00:07:33.493 "avg_latency_us": 114490.58237111254, 00:07:33.493 "min_latency_us": 21072.344615384616, 00:07:33.493 "max_latency_us": 137928.07384615386 
00:07:33.493 } 00:07:33.493 ], 00:07:33.493 "core_count": 1 00:07:33.493 } 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 63717 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 63717 ']' 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 63717 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63717 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.493 killing process with pid 63717 00:07:33.493 Received shutdown signal, test time was about 10.000000 seconds 00:07:33.493 00:07:33.493 Latency(us) 00:07:33.493 [2024-11-26T20:30:48.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.493 [2024-11-26T20:30:48.048Z] =================================================================================================================== 00:07:33.493 [2024-11-26T20:30:48.048Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63717' 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 63717 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 63717 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:33.493 rmmod nvme_tcp 00:07:33.493 rmmod nvme_fabrics 00:07:33.493 rmmod nvme_keyring 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 63685 ']' 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 63685 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 63685 ']' 
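As a quick sanity check on the result block just above, the throughput fields are mutually consistent for 4096-byte I/Os:

    8900.12 IOPS x 4096 B          ≈ 36.45 MB/s ≈ 34.77 MiB/s   (the reported "mibps")
    8900.12 IOPS x 10.081 s runtime ≈ 89,700 I/Os completed at queue depth 1024

so the verify workload sustained roughly 8.9k IOPS against the malloc-backed namespace over the TCP transport, with zero failed or timed-out I/Os.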
00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 63685 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63685 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:33.493 killing process with pid 63685 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63685' 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 63685 00:07:33.493 20:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 63685 00:07:33.751 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:33.751 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:33.751 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:33.751 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:33.751 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:33.751 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:33.751 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:33.751 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:33.751 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:33.751 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:33.751 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:33.751 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:33.751 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:33.751 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:33.751 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:33.751 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:33.751 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:33.751 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:33.751 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:33.751 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:34.010 20:30:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:07:34.010 00:07:34.010 real 0m13.332s 00:07:34.010 user 0m23.032s 00:07:34.010 sys 0m1.869s 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:34.010 ************************************ 00:07:34.010 END TEST nvmf_queue_depth 00:07:34.010 ************************************ 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:34.010 ************************************ 00:07:34.010 START TEST nvmf_target_multipath 00:07:34.010 ************************************ 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:34.010 * Looking for test storage... 
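One note on the queue_depth timing summary above: wall-clock time is ~13.3 s while user CPU time is ~23.0 s, a ratio of about 1.7. User time exceeding real time is expected here, since both SPDK processes involved (the nvmf target and the bdevperf initiator) run busy-polling reactors that spin on their own cores for the duration of the run rather than sleeping.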
00:07:34.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.010 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:34.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.010 --rc genhtml_branch_coverage=1 00:07:34.010 --rc genhtml_function_coverage=1 00:07:34.010 --rc genhtml_legend=1 00:07:34.010 --rc geninfo_all_blocks=1 00:07:34.010 --rc geninfo_unexecuted_blocks=1 00:07:34.011 00:07:34.011 ' 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:34.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.011 --rc genhtml_branch_coverage=1 00:07:34.011 --rc genhtml_function_coverage=1 00:07:34.011 --rc genhtml_legend=1 00:07:34.011 --rc geninfo_all_blocks=1 00:07:34.011 --rc geninfo_unexecuted_blocks=1 00:07:34.011 00:07:34.011 ' 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:34.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.011 --rc genhtml_branch_coverage=1 00:07:34.011 --rc genhtml_function_coverage=1 00:07:34.011 --rc genhtml_legend=1 00:07:34.011 --rc geninfo_all_blocks=1 00:07:34.011 --rc geninfo_unexecuted_blocks=1 00:07:34.011 00:07:34.011 ' 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:34.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.011 --rc genhtml_branch_coverage=1 00:07:34.011 --rc genhtml_function_coverage=1 00:07:34.011 --rc genhtml_legend=1 00:07:34.011 --rc geninfo_all_blocks=1 00:07:34.011 --rc geninfo_unexecuted_blocks=1 00:07:34.011 00:07:34.011 ' 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.011 
20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:34.011 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:34.011 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:34.269 20:30:48 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:34.269 Cannot find device "nvmf_init_br" 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:34.269 Cannot find device "nvmf_init_br2" 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:34.269 Cannot find device "nvmf_tgt_br" 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:34.269 Cannot find device "nvmf_tgt_br2" 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:34.269 Cannot find device "nvmf_init_br" 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:34.269 Cannot find device "nvmf_init_br2" 00:07:34.269 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:34.270 Cannot find device "nvmf_tgt_br" 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:34.270 Cannot find device "nvmf_tgt_br2" 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:34.270 Cannot find device "nvmf_br" 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:34.270 Cannot find device "nvmf_init_if" 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:34.270 Cannot find device "nvmf_init_if2" 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:34.270 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:34.270 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
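For orientation, the address plan that nvmf_veth_init is laying down here (and bridging together in the lines that follow) is, in short:

    host side (initiators):           nvmf_init_if  10.0.0.1/24,  nvmf_init_if2  10.0.0.2/24
    netns nvmf_tgt_ns_spdk (target):  nvmf_tgt_if   10.0.0.3/24,  nvmf_tgt_if2   10.0.0.4/24
    veth peers nvmf_init_br/_br2 and nvmf_tgt_br/_br2 are enslaved to the nvmf_br bridge

which is why the multipath test later reaches the same subsystem through both 10.0.0.3 and 10.0.0.4.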
00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:34.270 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:34.528 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:34.528 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:07:34.528 00:07:34.528 --- 10.0.0.3 ping statistics --- 00:07:34.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.528 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:34.528 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:34.528 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:07:34.528 00:07:34.528 --- 10.0.0.4 ping statistics --- 00:07:34.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.528 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:34.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:34.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:07:34.528 00:07:34.528 --- 10.0.0.1 ping statistics --- 00:07:34.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.528 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:34.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:34.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:07:34.528 00:07:34.528 --- 10.0.0.2 ping statistics --- 00:07:34.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.528 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64093 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64093 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 64093 ']' 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
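nvmfappstart launches nvmf_tgt inside the target namespace with -m 0xF (four reactors, matching the "Total cores available: 4" notice that follows), and waitforlisten then blocks until the RPC socket answers. A minimal sketch of that readiness check, assuming the default /var/tmp/spdk.sock and the in-repo rpc.py (the real helper in autotest_common.sh is more involved):

    # poll until the SPDK RPC Unix socket accepts a trivial request
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done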
00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.528 20:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:34.528 [2024-11-26 20:30:48.935206] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:07:34.528 [2024-11-26 20:30:48.935298] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.785 [2024-11-26 20:30:49.085363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.785 [2024-11-26 20:30:49.124947] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.785 [2024-11-26 20:30:49.125123] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.785 [2024-11-26 20:30:49.125187] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.785 [2024-11-26 20:30:49.125214] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.785 [2024-11-26 20:30:49.125229] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:34.785 [2024-11-26 20:30:49.126002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.785 [2024-11-26 20:30:49.126106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.785 [2024-11-26 20:30:49.126668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.785 [2024-11-26 20:30:49.126482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.785 [2024-11-26 20:30:49.159070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.391 20:30:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.391 20:30:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:07:35.391 20:30:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:35.391 20:30:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:35.391 20:30:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:35.391 20:30:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.391 20:30:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:35.667 [2024-11-26 20:30:50.019873] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.667 20:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:07:35.667 Malloc0 00:07:35.924 20:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:07:35.924 20:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:36.491 20:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:36.491 [2024-11-26 20:30:50.933325] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:36.491 20:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:07:36.748 [2024-11-26 20:30:51.117499] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:07:36.748 20:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid=38d6bd30-54c5-4858-a242-ab15764fb2d9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:07:36.749 20:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid=38d6bd30-54c5-4858-a242-ab15764fb2d9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:07:37.006 20:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:07:37.006 20:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:07:37.006 20:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:07:37.006 20:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:07:37.006 20:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:07:38.904 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:07:38.905 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64183 00:07:38.905 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:07:38.905 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:07:38.905 [global] 00:07:38.905 thread=1 00:07:38.905 invalidate=1 00:07:38.905 rw=randrw 00:07:38.905 time_based=1 00:07:38.905 runtime=6 00:07:38.905 ioengine=libaio 00:07:38.905 direct=1 00:07:38.905 bs=4096 00:07:38.905 iodepth=128 00:07:38.905 norandommap=0 00:07:38.905 numjobs=1 00:07:38.905 00:07:38.905 verify_dump=1 00:07:38.905 verify_backlog=512 00:07:38.905 verify_state_save=0 00:07:38.905 do_verify=1 00:07:38.905 verify=crc32c-intel 00:07:38.905 [job0] 00:07:38.905 filename=/dev/nvme0n1 00:07:38.905 Could not set queue depth (nvme0n1) 00:07:39.162 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:39.162 fio-3.35 00:07:39.162 Starting 1 thread 00:07:40.095 20:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:07:40.095 20:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:07:40.353 20:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:07:40.353 20:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:07:40.353 20:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:40.353 20:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:40.353 20:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:40.353 20:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:07:40.353 20:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:07:40.353 20:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:07:40.353 20:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:40.353 20:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:40.353 20:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:07:40.353 20:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:07:40.353 20:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:07:40.611 20:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:07:41.178 20:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:07:41.178 20:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:07:41.178 20:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:41.178 20:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:41.178 20:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:41.178 20:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:07:41.178 20:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:07:41.178 20:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:07:41.178 20:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:41.178 20:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:41.178 20:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:07:41.178 20:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:07:41.178 20:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64183 00:07:45.364 00:07:45.364 job0: (groupid=0, jobs=1): err= 0: pid=64204: Tue Nov 26 20:30:59 2024 00:07:45.364 read: IOPS=12.0k, BW=46.7MiB/s (49.0MB/s)(281MiB/6006msec) 00:07:45.364 slat (usec): min=3, max=7217, avg=50.95, stdev=215.00 00:07:45.364 clat (usec): min=1593, max=18761, avg=7321.33, stdev=1465.62 00:07:45.364 lat (usec): min=1599, max=18769, avg=7372.28, stdev=1470.02 00:07:45.364 clat percentiles (usec): 00:07:45.364 | 1.00th=[ 3785], 5.00th=[ 5276], 10.00th=[ 5866], 20.00th=[ 6521], 00:07:45.364 | 30.00th=[ 6849], 40.00th=[ 7111], 50.00th=[ 7177], 60.00th=[ 7308], 00:07:45.364 | 70.00th=[ 7570], 80.00th=[ 7898], 90.00th=[ 8586], 95.00th=[10683], 00:07:45.364 | 99.00th=[11863], 99.50th=[12518], 99.90th=[15926], 99.95th=[16712], 00:07:45.364 | 99.99th=[18744] 00:07:45.364 bw ( KiB/s): min=10928, max=33320, per=52.20%, avg=24977.33, stdev=6601.83, samples=12 00:07:45.364 iops : min= 2732, max= 8330, avg=6244.33, stdev=1650.46, samples=12 00:07:45.364 write: IOPS=6994, BW=27.3MiB/s (28.7MB/s)(147MiB/5365msec); 0 zone resets 00:07:45.364 slat (usec): min=7, max=2149, avg=56.50, stdev=156.52 00:07:45.364 clat (usec): min=1159, max=13876, avg=6292.03, stdev=1251.59 00:07:45.364 lat (usec): min=1178, max=13892, avg=6348.53, stdev=1256.35 00:07:45.364 clat percentiles (usec): 00:07:45.364 | 1.00th=[ 2769], 5.00th=[ 3720], 10.00th=[ 4621], 20.00th=[ 5604], 00:07:45.364 | 30.00th=[ 5997], 40.00th=[ 6259], 50.00th=[ 6456], 60.00th=[ 6652], 00:07:45.364 | 70.00th=[ 6849], 80.00th=[ 7046], 90.00th=[ 7373], 95.00th=[ 7635], 00:07:45.364 | 99.00th=[10159], 99.50th=[10814], 99.90th=[12125], 99.95th=[12256], 00:07:45.364 | 99.99th=[12649] 00:07:45.364 bw ( KiB/s): min=11200, max=32664, per=89.28%, avg=24980.00, stdev=6354.07, samples=12 00:07:45.364 iops : min= 2800, max= 8166, avg=6245.00, stdev=1588.52, samples=12 00:07:45.364 lat (msec) : 2=0.04%, 4=3.18%, 10=91.89%, 20=4.89% 00:07:45.364 cpu : usr=3.08%, sys=16.23%, ctx=6292, majf=0, minf=102 00:07:45.364 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:07:45.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:45.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:07:45.364 issued rwts: total=71840,37527,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:45.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:07:45.364 00:07:45.364 Run status group 0 (all jobs): 00:07:45.364 READ: bw=46.7MiB/s (49.0MB/s), 46.7MiB/s-46.7MiB/s (49.0MB/s-49.0MB/s), io=281MiB (294MB), run=6006-6006msec 00:07:45.364 WRITE: bw=27.3MiB/s (28.7MB/s), 27.3MiB/s-27.3MiB/s (28.7MB/s-28.7MB/s), io=147MiB (154MB), run=5365-5365msec 00:07:45.364 00:07:45.364 Disk stats (read/write): 00:07:45.364 nvme0n1: ios=70878/36773, merge=0/0, ticks=502144/219574, in_queue=721718, util=98.50% 00:07:45.364 20:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:07:45.364 20:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:07:45.622 20:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:07:45.622 20:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:07:45.622 20:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:45.622 20:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:45.622 20:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:45.622 20:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:07:45.622 20:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:07:45.622 20:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:07:45.622 20:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:45.622 20:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:45.622 20:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:07:45.622 20:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:07:45.622 20:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:07:45.622 20:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=64285 00:07:45.622 20:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:07:45.622 20:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:07:45.622 [global] 00:07:45.622 thread=1 00:07:45.622 invalidate=1 00:07:45.622 rw=randrw 00:07:45.622 time_based=1 00:07:45.622 runtime=6 00:07:45.622 ioengine=libaio 00:07:45.622 direct=1 00:07:45.622 bs=4096 00:07:45.622 iodepth=128 00:07:45.622 norandommap=0 00:07:45.622 numjobs=1 00:07:45.622 00:07:45.622 verify_dump=1 00:07:45.622 verify_backlog=512 00:07:45.622 verify_state_save=0 00:07:45.622 do_verify=1 00:07:45.622 verify=crc32c-intel 00:07:45.622 [job0] 00:07:45.622 filename=/dev/nvme0n1 00:07:45.622 Could not set queue depth (nvme0n1) 00:07:45.880 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:45.880 fio-3.35 00:07:45.880 Starting 1 thread 00:07:46.813 20:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:07:46.813 20:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:07:47.072 
20:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:07:47.072 20:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:07:47.072 20:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:47.072 20:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:47.072 20:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:47.072 20:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:07:47.072 20:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:07:47.072 20:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:07:47.072 20:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:47.072 20:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:47.072 20:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:07:47.072 20:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:07:47.072 20:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:07:47.329 20:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:07:47.588 20:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:07:47.588 20:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:07:47.588 20:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:47.588 20:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:47.588 20:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:07:47.588 20:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:07:47.588 20:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:07:47.588 20:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:07:47.588 20:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:47.588 20:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:47.588 20:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:07:47.588 20:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:07:47.588 20:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 64285 00:07:52.846 00:07:52.846 job0: (groupid=0, jobs=1): err= 0: pid=64312: Tue Nov 26 20:31:06 2024 00:07:52.846 read: IOPS=13.6k, BW=53.0MiB/s (55.5MB/s)(318MiB/6006msec) 00:07:52.846 slat (usec): min=2, max=8888, avg=40.10, stdev=191.82 00:07:52.846 clat (usec): min=173, max=20239, avg=6541.18, stdev=2471.61 00:07:52.846 lat (usec): min=181, max=20251, avg=6581.28, stdev=2483.15 00:07:52.846 clat percentiles (usec): 00:07:52.846 | 1.00th=[ 510], 5.00th=[ 963], 10.00th=[ 1958], 20.00th=[ 5669], 00:07:52.846 | 30.00th=[ 6259], 40.00th=[ 6783], 50.00th=[ 7111], 60.00th=[ 7242], 00:07:52.846 | 70.00th=[ 7504], 80.00th=[ 7898], 90.00th=[ 8455], 95.00th=[10421], 00:07:52.846 | 99.00th=[11863], 99.50th=[12387], 99.90th=[18744], 99.95th=[19006], 00:07:52.846 | 99.99th=[20317] 00:07:52.846 bw ( KiB/s): min=11808, max=51488, per=50.96%, avg=27643.33, stdev=12113.13, samples=12 00:07:52.846 iops : min= 2952, max=12872, avg=6910.83, stdev=3028.28, samples=12 00:07:52.846 write: IOPS=8143, BW=31.8MiB/s (33.4MB/s)(163MiB/5114msec); 0 zone resets 00:07:52.846 slat (usec): min=6, max=2386, avg=45.33, stdev=127.80 00:07:52.846 clat (usec): min=135, max=18889, avg=5515.51, stdev=2275.61 00:07:52.846 lat (usec): min=156, max=19558, avg=5560.83, stdev=2287.12 00:07:52.846 clat percentiles (usec): 00:07:52.846 | 1.00th=[ 429], 5.00th=[ 717], 10.00th=[ 1287], 20.00th=[ 3556], 00:07:52.846 | 30.00th=[ 5276], 40.00th=[ 6063], 50.00th=[ 6390], 60.00th=[ 6587], 00:07:52.846 | 70.00th=[ 6849], 80.00th=[ 7046], 90.00th=[ 7373], 95.00th=[ 7701], 00:07:52.846 | 99.00th=[10290], 99.50th=[10945], 99.90th=[12780], 99.95th=[13173], 00:07:52.846 | 99.99th=[15401] 00:07:52.846 bw ( KiB/s): min=12288, max=50608, per=85.09%, avg=27719.33, stdev=11854.28, samples=12 00:07:52.846 iops : min= 3072, max=12652, avg=6929.83, stdev=2963.57, samples=12 00:07:52.846 lat (usec) : 250=0.12%, 500=1.07%, 750=2.55%, 1000=2.56% 00:07:52.846 lat (msec) : 2=5.07%, 4=5.66%, 10=78.61%, 20=4.35%, 50=0.01% 00:07:52.846 cpu : usr=4.16%, sys=18.47%, ctx=9245, majf=0, minf=139 00:07:52.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:07:52.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:52.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:07:52.846 issued rwts: total=81448,41648,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:52.846 latency 
: target=0, window=0, percentile=100.00%, depth=128 00:07:52.846 00:07:52.846 Run status group 0 (all jobs): 00:07:52.846 READ: bw=53.0MiB/s (55.5MB/s), 53.0MiB/s-53.0MiB/s (55.5MB/s-55.5MB/s), io=318MiB (334MB), run=6006-6006msec 00:07:52.846 WRITE: bw=31.8MiB/s (33.4MB/s), 31.8MiB/s-31.8MiB/s (33.4MB/s-33.4MB/s), io=163MiB (171MB), run=5114-5114msec 00:07:52.846 00:07:52.846 Disk stats (read/write): 00:07:52.846 nvme0n1: ios=80584/40915, merge=0/0, ticks=506298/212322, in_queue=718620, util=98.47% 00:07:52.846 20:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:52.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:07:52.846 20:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:52.846 20:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:07:52.846 20:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:07:52.846 20:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:52.846 20:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:07:52.846 20:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:52.846 20:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:07:52.846 20:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:52.846 20:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:07:52.846 20:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:07:52.846 20:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:07:52.846 20:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:07:52.846 20:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:52.846 20:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:00.951 rmmod nvme_tcp 00:08:00.951 rmmod nvme_fabrics 00:08:00.951 rmmod nvme_keyring 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- 
# '[' -n 64093 ']' 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64093 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 64093 ']' 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 64093 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64093 00:08:00.951 killing process with pid 64093 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64093' 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 64093 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 64093 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:00.951 
20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:08:00.951 ************************************ 00:08:00.951 END TEST nvmf_target_multipath 00:08:00.951 ************************************ 00:08:00.951 00:08:00.951 real 0m27.031s 00:08:00.951 user 1m43.662s 00:08:00.951 sys 0m7.323s 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:00.951 ************************************ 00:08:00.951 START TEST nvmf_zcopy 00:08:00.951 ************************************ 00:08:00.951 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:01.211 * Looking for test storage... 
00:08:01.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:01.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.211 --rc genhtml_branch_coverage=1 00:08:01.211 --rc genhtml_function_coverage=1 00:08:01.211 --rc genhtml_legend=1 00:08:01.211 --rc geninfo_all_blocks=1 00:08:01.211 --rc geninfo_unexecuted_blocks=1 00:08:01.211 00:08:01.211 ' 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:01.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.211 --rc genhtml_branch_coverage=1 00:08:01.211 --rc genhtml_function_coverage=1 00:08:01.211 --rc genhtml_legend=1 00:08:01.211 --rc geninfo_all_blocks=1 00:08:01.211 --rc geninfo_unexecuted_blocks=1 00:08:01.211 00:08:01.211 ' 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:01.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.211 --rc genhtml_branch_coverage=1 00:08:01.211 --rc genhtml_function_coverage=1 00:08:01.211 --rc genhtml_legend=1 00:08:01.211 --rc geninfo_all_blocks=1 00:08:01.211 --rc geninfo_unexecuted_blocks=1 00:08:01.211 00:08:01.211 ' 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:01.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.211 --rc genhtml_branch_coverage=1 00:08:01.211 --rc genhtml_function_coverage=1 00:08:01.211 --rc genhtml_legend=1 00:08:01.211 --rc geninfo_all_blocks=1 00:08:01.211 --rc geninfo_unexecuted_blocks=1 00:08:01.211 00:08:01.211 ' 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.211 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:01.212 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:01.212 Cannot find device "nvmf_init_br" 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:08:01.212 20:31:15 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:01.212 Cannot find device "nvmf_init_br2" 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:01.212 Cannot find device "nvmf_tgt_br" 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:08:01.212 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:01.213 Cannot find device "nvmf_tgt_br2" 00:08:01.213 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:08:01.213 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:01.213 Cannot find device "nvmf_init_br" 00:08:01.213 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:08:01.213 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:01.213 Cannot find device "nvmf_init_br2" 00:08:01.213 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:08:01.213 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:01.213 Cannot find device "nvmf_tgt_br" 00:08:01.213 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:08:01.213 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:01.213 Cannot find device "nvmf_tgt_br2" 00:08:01.213 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:08:01.213 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:01.213 Cannot find device "nvmf_br" 00:08:01.213 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:08:01.213 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:01.213 Cannot find device "nvmf_init_if" 00:08:01.213 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:08:01.213 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:01.213 Cannot find device "nvmf_init_if2" 00:08:01.213 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:08:01.213 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:01.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:01.213 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:08:01.213 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:01.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:01.213 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:08:01.213 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:01.213 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:01.213 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:08:01.213 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:01.472 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:01.472 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:01.472 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:01.472 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:01.473 20:31:15 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:01.473 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:01.473 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:08:01.473 00:08:01.473 --- 10.0.0.3 ping statistics --- 00:08:01.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.473 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:01.473 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:01.473 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:08:01.473 00:08:01.473 --- 10.0.0.4 ping statistics --- 00:08:01.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.473 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:01.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:01.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.014 ms 00:08:01.473 00:08:01.473 --- 10.0.0.1 ping statistics --- 00:08:01.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.473 rtt min/avg/max/mdev = 0.014/0.014/0.014/0.000 ms 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:01.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:01.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:08:01.473 00:08:01.473 --- 10.0.0.2 ping statistics --- 00:08:01.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.473 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=64724 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 64724 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 64724 ']' 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:01.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.473 20:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:01.473 [2024-11-26 20:31:15.964961] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:08:01.473 [2024-11-26 20:31:15.965022] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.732 [2024-11-26 20:31:16.104665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.732 [2024-11-26 20:31:16.141842] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.732 [2024-11-26 20:31:16.141881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:01.732 [2024-11-26 20:31:16.141887] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:01.732 [2024-11-26 20:31:16.141892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:01.732 [2024-11-26 20:31:16.141897] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:01.732 [2024-11-26 20:31:16.142174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.732 [2024-11-26 20:31:16.173665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:02.415 [2024-11-26 20:31:16.880649] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:08:02.415 [2024-11-26 20:31:16.896707] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:02.415 malloc0 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:02.415 { 00:08:02.415 "params": { 00:08:02.415 "name": "Nvme$subsystem", 00:08:02.415 "trtype": "$TEST_TRANSPORT", 00:08:02.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:02.415 "adrfam": "ipv4", 00:08:02.415 "trsvcid": "$NVMF_PORT", 00:08:02.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:02.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:02.415 "hdgst": ${hdgst:-false}, 00:08:02.415 "ddgst": ${ddgst:-false} 00:08:02.415 }, 00:08:02.415 "method": "bdev_nvme_attach_controller" 00:08:02.415 } 00:08:02.415 EOF 00:08:02.415 )") 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
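Up to this point the rpc_cmd trace has configured the target for the zero-copy test: a TCP transport created with --zcopy, one subsystem backed by a 32 MiB malloc namespace, and data plus discovery listeners on 10.0.0.3:4420. Assuming rpc_cmd wraps scripts/rpc.py the way SPDK's test helpers normally do, the same configuration could be issued by hand roughly as follows (every flag below is copied verbatim from the trace):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

# TCP transport with zero-copy sockets enabled (flags exactly as traced).
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy

# Subsystem allowing any host, serial SPDK00000000000001, up to 10 namespaces.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

# Data listener plus discovery listener on the in-namespace target address.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

# 32 MiB malloc bdev with a 4096-byte block size, attached as namespace 1.
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The bdevperf invocation that follows then reads the JSON produced by gen_nvmf_target_json on /dev/fd/62, which is simply a bdev_nvme_attach_controller description pointing at that listener (printed in full a few entries below).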
00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:02.415 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:02.415 "params": { 00:08:02.415 "name": "Nvme1", 00:08:02.415 "trtype": "tcp", 00:08:02.415 "traddr": "10.0.0.3", 00:08:02.415 "adrfam": "ipv4", 00:08:02.415 "trsvcid": "4420", 00:08:02.415 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:02.415 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:02.415 "hdgst": false, 00:08:02.415 "ddgst": false 00:08:02.415 }, 00:08:02.415 "method": "bdev_nvme_attach_controller" 00:08:02.415 }' 00:08:02.415 [2024-11-26 20:31:16.965513] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:08:02.415 [2024-11-26 20:31:16.965583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64757 ] 00:08:02.673 [2024-11-26 20:31:17.099016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.673 [2024-11-26 20:31:17.134482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.673 [2024-11-26 20:31:17.173751] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.932 Running I/O for 10 seconds... 00:08:04.824 6093.00 IOPS, 47.60 MiB/s [2024-11-26T20:31:20.311Z] 6304.00 IOPS, 49.25 MiB/s [2024-11-26T20:31:21.682Z] 6422.33 IOPS, 50.17 MiB/s [2024-11-26T20:31:22.616Z] 6475.75 IOPS, 50.59 MiB/s [2024-11-26T20:31:23.550Z] 6547.60 IOPS, 51.15 MiB/s [2024-11-26T20:31:24.483Z] 6588.67 IOPS, 51.47 MiB/s [2024-11-26T20:31:25.418Z] 6620.00 IOPS, 51.72 MiB/s [2024-11-26T20:31:26.436Z] 6692.50 IOPS, 52.29 MiB/s [2024-11-26T20:31:27.372Z] 6885.89 IOPS, 53.80 MiB/s [2024-11-26T20:31:27.372Z] 7046.00 IOPS, 55.05 MiB/s 00:08:12.817 Latency(us) 00:08:12.817 [2024-11-26T20:31:27.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.817 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:12.817 Verification LBA range: start 0x0 length 0x1000 00:08:12.817 Nvme1n1 : 10.01 7048.94 55.07 0.00 0.00 18109.48 831.80 29239.14 00:08:12.817 [2024-11-26T20:31:27.372Z] =================================================================================================================== 00:08:12.817 [2024-11-26T20:31:27.372Z] Total : 7048.94 55.07 0.00 0.00 18109.48 831.80 29239.14 00:08:13.076 20:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=64873 00:08:13.076 20:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:13.076 20:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:13.076 20:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:13.076 20:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:13.076 20:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:13.076 20:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:13.076 20:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:13.076 20:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:13.076 { 00:08:13.076 "params": { 00:08:13.076 "name": "Nvme$subsystem", 00:08:13.076 "trtype": "$TEST_TRANSPORT", 00:08:13.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:13.076 "adrfam": "ipv4", 00:08:13.076 "trsvcid": "$NVMF_PORT", 00:08:13.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:13.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:13.076 "hdgst": ${hdgst:-false}, 00:08:13.076 "ddgst": ${ddgst:-false} 00:08:13.076 }, 00:08:13.076 "method": "bdev_nvme_attach_controller" 00:08:13.076 } 00:08:13.076 EOF 00:08:13.076 )") 00:08:13.076 20:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:13.076 20:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:13.076 [2024-11-26 20:31:27.409830] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.076 [2024-11-26 20:31:27.409869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.076 20:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:13.077 20:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:13.077 "params": { 00:08:13.077 "name": "Nvme1", 00:08:13.077 "trtype": "tcp", 00:08:13.077 "traddr": "10.0.0.3", 00:08:13.077 "adrfam": "ipv4", 00:08:13.077 "trsvcid": "4420", 00:08:13.077 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:13.077 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:13.077 "hdgst": false, 00:08:13.077 "ddgst": false 00:08:13.077 }, 00:08:13.077 "method": "bdev_nvme_attach_controller" 00:08:13.077 }' 00:08:13.077 [2024-11-26 20:31:27.417845] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.077 [2024-11-26 20:31:27.417880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.077 [2024-11-26 20:31:27.425834] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.077 [2024-11-26 20:31:27.425865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.077 [2024-11-26 20:31:27.433827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.077 [2024-11-26 20:31:27.433856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.077 [2024-11-26 20:31:27.437327] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:08:13.077 [2024-11-26 20:31:27.437383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64873 ] 00:08:13.077 [2024-11-26 20:31:27.445815] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.077 [2024-11-26 20:31:27.445838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.077 [2024-11-26 20:31:27.453806] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.077 [2024-11-26 20:31:27.453824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.077 [2024-11-26 20:31:27.461823] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.077 [2024-11-26 20:31:27.461849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.077 [2024-11-26 20:31:27.473835] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.077 [2024-11-26 20:31:27.473870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.077 [2024-11-26 20:31:27.481817] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.077 [2024-11-26 20:31:27.481838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.077 [2024-11-26 20:31:27.489830] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.077 [2024-11-26 20:31:27.489852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.077 [2024-11-26 20:31:27.497822] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.077 [2024-11-26 20:31:27.497839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.077 [2024-11-26 20:31:27.505821] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.077 [2024-11-26 20:31:27.505838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.077 [2024-11-26 20:31:27.513834] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.077 [2024-11-26 20:31:27.513854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.077 [2024-11-26 20:31:27.521824] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.077 [2024-11-26 20:31:27.521842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.077 [2024-11-26 20:31:27.529826] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.077 [2024-11-26 20:31:27.529844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.077 [2024-11-26 20:31:27.537827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.077 [2024-11-26 20:31:27.537843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.077 [2024-11-26 20:31:27.545830] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.077 [2024-11-26 20:31:27.545847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.077 [2024-11-26 20:31:27.553839] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.077 [2024-11-26 20:31:27.553855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.077 [2024-11-26 20:31:27.561834] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.077 [2024-11-26 20:31:27.561848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.077 [2024-11-26 20:31:27.569837] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.077 [2024-11-26 20:31:27.569852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.077 [2024-11-26 20:31:27.574469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.077 [2024-11-26 20:31:27.577850] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.077 [2024-11-26 20:31:27.577869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.077 [2024-11-26 20:31:27.585840] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.077 [2024-11-26 20:31:27.585857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.077 [2024-11-26 20:31:27.593845] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.077 [2024-11-26 20:31:27.593863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.077 [2024-11-26 20:31:27.601846] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.077 [2024-11-26 20:31:27.601863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.077 [2024-11-26 20:31:27.610880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.077 [2024-11-26 20:31:27.613846] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.077 [2024-11-26 20:31:27.613861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.077 [2024-11-26 20:31:27.621847] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.077 [2024-11-26 20:31:27.621862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.077 [2024-11-26 20:31:27.629850] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.077 [2024-11-26 20:31:27.629865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.339 [2024-11-26 20:31:27.637853] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.339 [2024-11-26 20:31:27.637868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.339 [2024-11-26 20:31:27.645854] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.339 [2024-11-26 20:31:27.645872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.339 [2024-11-26 20:31:27.652269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.339 [2024-11-26 20:31:27.653857] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.339 [2024-11-26 20:31:27.653874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.339 [2024-11-26 20:31:27.661872] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:13.339 [2024-11-26 20:31:27.661893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.339 [2024-11-26 20:31:27.669874] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.339 [2024-11-26 20:31:27.669893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.339 [2024-11-26 20:31:27.677861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.339 [2024-11-26 20:31:27.677877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.339 [2024-11-26 20:31:27.685867] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.339 [2024-11-26 20:31:27.685886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.339 [2024-11-26 20:31:27.693888] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.339 [2024-11-26 20:31:27.693911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.339 [2024-11-26 20:31:27.701896] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.339 [2024-11-26 20:31:27.701932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.339 [2024-11-26 20:31:27.709901] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.339 [2024-11-26 20:31:27.709937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.339 [2024-11-26 20:31:27.717912] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.339 [2024-11-26 20:31:27.717938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.339 [2024-11-26 20:31:27.725901] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.339 [2024-11-26 20:31:27.725932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.339 [2024-11-26 20:31:27.733900] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.339 [2024-11-26 20:31:27.733941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.339 [2024-11-26 20:31:27.741896] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.339 [2024-11-26 20:31:27.741912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.339 [2024-11-26 20:31:27.749936] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.339 [2024-11-26 20:31:27.749959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.339 [2024-11-26 20:31:27.757909] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.339 [2024-11-26 20:31:27.757941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.339 Running I/O for 5 seconds... 
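The long run of *ERROR* pairs from subsystem.c and nvmf_rpc.c through this stretch appears to be expected behaviour for this phase of the test rather than a failure: a second bdevperf instance (perfpid=64873, 5 s of randrw at queue depth 128 with 8 KiB I/O) is driving the subsystem while attempts to add NSID 1 keep arriving and being rejected, since that namespace is still attached. A rough way to provoke the same rejection by hand, assuming the target configured above is still running (an illustrative loop, not the harness's exact script):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

# Each attempt to re-add an NSID that is already attached fails on the target with
# "Requested NSID 1 already in use" followed by "Unable to add namespace".
for _ in $(seq 1 5); do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done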
00:08:13.339 [2024-11-26 20:31:27.772808] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.339 [2024-11-26 20:31:27.772828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.339 [2024-11-26 20:31:27.782151] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.339 [2024-11-26 20:31:27.782173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.339 [2024-11-26 20:31:27.797629] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.339 [2024-11-26 20:31:27.797663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.339 [2024-11-26 20:31:27.808225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.339 [2024-11-26 20:31:27.808245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.339 [2024-11-26 20:31:27.817076] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.339 [2024-11-26 20:31:27.817095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.339 [2024-11-26 20:31:27.825608] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.339 [2024-11-26 20:31:27.825626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.339 [2024-11-26 20:31:27.834056] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.339 [2024-11-26 20:31:27.834074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.340 [2024-11-26 20:31:27.840853] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.340 [2024-11-26 20:31:27.840875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.340 [2024-11-26 20:31:27.851923] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.340 [2024-11-26 20:31:27.851945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.340 [2024-11-26 20:31:27.859995] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.340 [2024-11-26 20:31:27.860018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.340 [2024-11-26 20:31:27.870055] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.340 [2024-11-26 20:31:27.870078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.340 [2024-11-26 20:31:27.877499] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.340 [2024-11-26 20:31:27.877518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.340 [2024-11-26 20:31:27.888411] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.340 [2024-11-26 20:31:27.888433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:27.896931] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:27.896953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:27.906973] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 
[2024-11-26 20:31:27.906995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:27.915944] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:27.915964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:27.924701] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:27.924721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:27.933531] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:27.933550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:27.940929] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:27.940952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:27.952023] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:27.952044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:27.959693] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:27.959713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:27.969896] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:27.969918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:27.977010] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:27.977030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:27.987972] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:27.987992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:27.995780] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:27.995802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:28.006422] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:28.006444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:28.015339] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:28.015361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:28.023958] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:28.023978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:28.032478] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:28.032497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:28.041685] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:28.041706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:28.050174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:28.050192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:28.059224] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:28.059243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:28.068389] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:28.068407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:28.077560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:28.077580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:28.086825] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:28.086844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:28.095413] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:28.095431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:28.103924] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:28.103942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:28.113188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:28.113208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:28.119972] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:28.119991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:28.130953] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:28.130971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:28.139618] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:28.139636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.601 [2024-11-26 20:31:28.148189] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.601 [2024-11-26 20:31:28.148210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.862 [2024-11-26 20:31:28.157302] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.862 [2024-11-26 20:31:28.157323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.171834] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.171855] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.180094] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.180114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.186831] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.186850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.197739] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.197757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.206943] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.206961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.215576] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.215602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.224272] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.224291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.232869] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.232887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.241345] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.241364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.248017] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.248036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.258985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.259004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.267733] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.267751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.282174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.282196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.290626] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.290645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.299885] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.299903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.309067] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.309085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.318333] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.318351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.326761] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.326779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.335421] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.335439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.343988] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.344006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.352547] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.352565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.361176] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.361196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.369820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.369838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.378304] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.378322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.392716] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.392737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.406736] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.406758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.863 [2024-11-26 20:31:28.415905] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.863 [2024-11-26 20:31:28.415924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.125 [2024-11-26 20:31:28.424475] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.125 [2024-11-26 20:31:28.424494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.125 [2024-11-26 20:31:28.432983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.125 [2024-11-26 20:31:28.433004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.125 [2024-11-26 20:31:28.441597] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.125 [2024-11-26 20:31:28.441614] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.125 [2024-11-26 20:31:28.450012] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.125 [2024-11-26 20:31:28.450030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.125 [2024-11-26 20:31:28.458548] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.125 [2024-11-26 20:31:28.458566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.125 [2024-11-26 20:31:28.467144] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.125 [2024-11-26 20:31:28.467162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.125 [2024-11-26 20:31:28.473834] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.125 [2024-11-26 20:31:28.473852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.125 [2024-11-26 20:31:28.484533] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.125 [2024-11-26 20:31:28.484555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.125 [2024-11-26 20:31:28.493170] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.125 [2024-11-26 20:31:28.493192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.125 [2024-11-26 20:31:28.502390] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.125 [2024-11-26 20:31:28.502413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.125 [2024-11-26 20:31:28.511335] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.125 [2024-11-26 20:31:28.511358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.125 [2024-11-26 20:31:28.520266] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.125 [2024-11-26 20:31:28.520289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.125 [2024-11-26 20:31:28.528727] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.125 [2024-11-26 20:31:28.528750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.125 [2024-11-26 20:31:28.537188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.125 [2024-11-26 20:31:28.537211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.125 [2024-11-26 20:31:28.546294] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.125 [2024-11-26 20:31:28.546317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.125 [2024-11-26 20:31:28.555387] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.125 [2024-11-26 20:31:28.555409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.125 [2024-11-26 20:31:28.564483] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.125 [2024-11-26 20:31:28.564506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.125 [2024-11-26 20:31:28.572874] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.125 [2024-11-26 20:31:28.572894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.125 [2024-11-26 20:31:28.582225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.125 [2024-11-26 20:31:28.582244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.125 [2024-11-26 20:31:28.590838] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.126 [2024-11-26 20:31:28.590857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.126 [2024-11-26 20:31:28.599409] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.126 [2024-11-26 20:31:28.599430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.126 [2024-11-26 20:31:28.608248] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.126 [2024-11-26 20:31:28.608268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.126 [2024-11-26 20:31:28.617328] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.126 [2024-11-26 20:31:28.617348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.126 [2024-11-26 20:31:28.626215] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.126 [2024-11-26 20:31:28.626234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.126 [2024-11-26 20:31:28.635138] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.126 [2024-11-26 20:31:28.635156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.126 [2024-11-26 20:31:28.643556] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.126 [2024-11-26 20:31:28.643576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.126 [2024-11-26 20:31:28.652553] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.126 [2024-11-26 20:31:28.652573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.126 [2024-11-26 20:31:28.660888] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.126 [2024-11-26 20:31:28.660908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.126 [2024-11-26 20:31:28.675027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.126 [2024-11-26 20:31:28.675046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 20:31:28.683968] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.683994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 20:31:28.692476] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.692503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 20:31:28.699202] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.699225] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 20:31:28.710304] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.710328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 20:31:28.719104] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.719125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 20:31:28.727645] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.727666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 20:31:28.736320] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.736342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 20:31:28.744995] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.745017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 20:31:28.754101] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.754122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 16290.00 IOPS, 127.27 MiB/s [2024-11-26T20:31:28.942Z] [2024-11-26 20:31:28.763158] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.763178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 20:31:28.771538] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.771558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 20:31:28.778182] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.778206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 20:31:28.789024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.789048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 20:31:28.797829] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.797852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 20:31:28.806912] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.806933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 20:31:28.815974] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.815995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 20:31:28.824346] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.824369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 
20:31:28.833456] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.833476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 20:31:28.841896] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.841917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 20:31:28.850845] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.850866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 20:31:28.859255] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.859275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 20:31:28.868284] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.868305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 20:31:28.876727] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.876750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 20:31:28.885770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.885791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 20:31:28.894596] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.894616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 20:31:28.903853] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.903872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 20:31:28.912721] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.912742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.387 [2024-11-26 20:31:28.921734] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.387 [2024-11-26 20:31:28.921755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.388 [2024-11-26 20:31:28.930647] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.388 [2024-11-26 20:31:28.930668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.388 [2024-11-26 20:31:28.939088] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.388 [2024-11-26 20:31:28.939108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.707 [2024-11-26 20:31:28.947542] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.707 [2024-11-26 20:31:28.947562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.707 [2024-11-26 20:31:28.956706] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.707 [2024-11-26 20:31:28.956724] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:14.707 [2024-11-26 20:31:28.965825] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:14.707 [2024-11-26 20:31:28.965846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of records (subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats back to back, differing only in timestamps, from 00:08:14.707 [2024-11-26 20:31:28.974971] onward ...]
00:08:15.231 16614.50 IOPS, 129.80 MiB/s [2024-11-26T20:31:29.786Z]
[... spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc_ns_paused error pairs continue ...]
00:08:16.272 16679.67 IOPS, 130.31 MiB/s [2024-11-26T20:31:30.827Z]
[... spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc_ns_paused error pairs continue through 00:08:17.317 ...]
00:08:17.317 [2024-11-26 20:31:31.681538]
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.317 [2024-11-26 20:31:31.681566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.317 [2024-11-26 20:31:31.690402] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.317 [2024-11-26 20:31:31.690428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.317 [2024-11-26 20:31:31.699379] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.317 [2024-11-26 20:31:31.699407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.317 [2024-11-26 20:31:31.707771] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.317 [2024-11-26 20:31:31.707798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.317 [2024-11-26 20:31:31.716865] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.317 [2024-11-26 20:31:31.716890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.317 [2024-11-26 20:31:31.725310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.317 [2024-11-26 20:31:31.725335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.317 [2024-11-26 20:31:31.734259] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.317 [2024-11-26 20:31:31.734287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.317 [2024-11-26 20:31:31.743422] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.317 [2024-11-26 20:31:31.743448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.317 [2024-11-26 20:31:31.751782] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.317 [2024-11-26 20:31:31.751808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.317 [2024-11-26 20:31:31.760650] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.317 [2024-11-26 20:31:31.760676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.317 16745.00 IOPS, 130.82 MiB/s [2024-11-26T20:31:31.872Z] [2024-11-26 20:31:31.769536] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.317 [2024-11-26 20:31:31.769561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.317 [2024-11-26 20:31:31.778747] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.317 [2024-11-26 20:31:31.778772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.317 [2024-11-26 20:31:31.792995] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.317 [2024-11-26 20:31:31.793020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.317 [2024-11-26 20:31:31.801962] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.317 [2024-11-26 20:31:31.801989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.317 [2024-11-26 20:31:31.811035] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:17.317 [2024-11-26 20:31:31.811060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.317 [2024-11-26 20:31:31.820174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.317 [2024-11-26 20:31:31.820200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.317 [2024-11-26 20:31:31.828442] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.317 [2024-11-26 20:31:31.828469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.317 [2024-11-26 20:31:31.837002] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.317 [2024-11-26 20:31:31.837028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.317 [2024-11-26 20:31:31.845629] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.317 [2024-11-26 20:31:31.845655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.317 [2024-11-26 20:31:31.854687] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.317 [2024-11-26 20:31:31.854711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.317 [2024-11-26 20:31:31.863180] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.317 [2024-11-26 20:31:31.863205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:31.872180] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:31.872206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:31.881375] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:31.881400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:31.896108] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:31.896132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:31.907147] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:31.907169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:31.916129] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:31.916153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:31.930708] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:31.930730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:31.939473] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:31.939499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:31.948422] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:31.948445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:31.957674] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:31.957706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:31.966693] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:31.966715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:31.975975] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:31.975999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:31.985111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:31.985134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:31.999579] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:31.999614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:32.008964] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:32.008990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:32.017791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:32.017814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:32.026886] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:32.026909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:32.036254] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:32.036278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:32.045500] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:32.045524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:32.054174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:32.054199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:32.062658] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:32.062683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:32.071279] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:32.071306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:32.079796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:32.079820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:32.088321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:32.088346] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:32.097369] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:32.097395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:32.105926] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:32.105957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:32.115225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:32.115249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:32.123769] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:32.123792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:32.132409] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:32.132431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:32.139122] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:32.139148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:32.150153] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:32.150175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.624 [2024-11-26 20:31:32.158824] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.624 [2024-11-26 20:31:32.158847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.167325] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.167351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.176650] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.176673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.185087] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.185110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.193705] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.193727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.203061] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.203085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.209861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.209883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.220141] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.220163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.229001] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.229023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.237467] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.237490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.244210] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.244232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.255437] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.255459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.264069] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.264091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.273338] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.273360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.280038] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.280061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.291260] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.291285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.300551] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.300574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.309046] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.309068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.318261] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.318283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.326856] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.326880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.336141] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.336164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.345426] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.345449] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.353950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.353975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.362517] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.362539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.371610] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.371630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.380251] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.380274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.388776] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.388799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.397914] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.397945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.406583] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.406614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.415833] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.415857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.424426] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.424451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.433100] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.433123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.442372] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.442395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.451049] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.451071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.906 [2024-11-26 20:31:32.459652] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.906 [2024-11-26 20:31:32.459675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.167 [2024-11-26 20:31:32.468233] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.167 [2024-11-26 20:31:32.468257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.167 [2024-11-26 20:31:32.476780] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.167 [2024-11-26 20:31:32.476803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.167 [2024-11-26 20:31:32.485376] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.167 [2024-11-26 20:31:32.485399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.167 [2024-11-26 20:31:32.494023] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.167 [2024-11-26 20:31:32.494051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.167 [2024-11-26 20:31:32.502595] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.167 [2024-11-26 20:31:32.502617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.167 [2024-11-26 20:31:32.511141] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.167 [2024-11-26 20:31:32.511162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.167 [2024-11-26 20:31:32.519770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.167 [2024-11-26 20:31:32.519793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.167 [2024-11-26 20:31:32.528268] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.167 [2024-11-26 20:31:32.528291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.167 [2024-11-26 20:31:32.536951] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.167 [2024-11-26 20:31:32.536974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.167 [2024-11-26 20:31:32.545542] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.167 [2024-11-26 20:31:32.545564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.168 [2024-11-26 20:31:32.552236] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.168 [2024-11-26 20:31:32.552259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.168 [2024-11-26 20:31:32.563142] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.168 [2024-11-26 20:31:32.563167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.168 [2024-11-26 20:31:32.572240] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.168 [2024-11-26 20:31:32.572265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.168 [2024-11-26 20:31:32.578992] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.168 [2024-11-26 20:31:32.579017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.168 [2024-11-26 20:31:32.589280] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.168 [2024-11-26 20:31:32.589305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.168 [2024-11-26 20:31:32.598162] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.168 [2024-11-26 20:31:32.598191] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.168 [2024-11-26 20:31:32.606760] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.168 [2024-11-26 20:31:32.606788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.168 [2024-11-26 20:31:32.615431] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.168 [2024-11-26 20:31:32.615455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.168 [2024-11-26 20:31:32.622159] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.168 [2024-11-26 20:31:32.622185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.168 [2024-11-26 20:31:32.633262] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.168 [2024-11-26 20:31:32.633287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.168 [2024-11-26 20:31:32.642225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.168 [2024-11-26 20:31:32.642251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.168 [2024-11-26 20:31:32.650882] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.168 [2024-11-26 20:31:32.650905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.168 [2024-11-26 20:31:32.659501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.168 [2024-11-26 20:31:32.659524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.168 [2024-11-26 20:31:32.668062] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.168 [2024-11-26 20:31:32.668085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.168 [2024-11-26 20:31:32.676543] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.168 [2024-11-26 20:31:32.676564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.168 [2024-11-26 20:31:32.685235] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.168 [2024-11-26 20:31:32.685257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.168 [2024-11-26 20:31:32.694559] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.168 [2024-11-26 20:31:32.694583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.168 [2024-11-26 20:31:32.703802] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.168 [2024-11-26 20:31:32.703826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.168 [2024-11-26 20:31:32.712949] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.168 [2024-11-26 20:31:32.712972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.429 [2024-11-26 20:31:32.721518] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.429 [2024-11-26 20:31:32.721542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.429 [2024-11-26 20:31:32.730697] 
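The rejected add-namespace calls above go through SPDK's JSON-RPC interface (rpc_cmd wraps scripts/rpc.py). As a point of reference only, the same error can be reproduced by hand against a running nvmf target; this is a minimal sketch assuming the stock scripts/rpc.py helper and the bdev/subsystem names used in this run, not an excerpt from zcopy.sh:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create -b malloc0 64 512                            # 64 MiB malloc bdev, 512 B blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first attach of NSID 1 succeeds
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # repeating it fails: "Requested NSID 1 already in use"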
00:08:18.429 16712.80 IOPS, 130.57 MiB/s [2024-11-26T20:31:32.984Z]
00:08:18.429 Latency(us)
00:08:18.429 [2024-11-26T20:31:32.984Z] Device Information   : runtime(s)      IOPS     MiB/s   Fail/s     TO/s   Average       min       max
00:08:18.429 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:18.429 Nvme1n1              :       5.01  16715.09    130.59     0.00     0.00   7649.84   2684.46  18350.08
00:08:18.429 [2024-11-26T20:31:32.984Z] ===================================================================================================================
00:08:18.429 [2024-11-26T20:31:32.984Z] Total                :            16715.09    130.59     0.00     0.00   7649.84   2684.46  18350.08
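(Sanity check, not part of the tool output: the job line reports an 8192 B I/O size, so 16715.09 IOPS × 8192 B ≈ 136.9 MB/s, which is ≈ 130.6 MiB/s after dividing by 2^20, matching the reported 130.59 MiB/s; the interim readings of 16745.00 and 16712.80 IOPS bracket that 5.01 s average.)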
00:08:18.429 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (64873) - No such process
00:08:18.429 20:31:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 64873
00:08:18.429 20:31:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:18.429 20:31:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:18.429 20:31:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:18.429 20:31:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:18.429 20:31:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:18.429 20:31:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:18.429 20:31:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:18.429 delay0
00:08:18.429 20:31:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:18.429 20:31:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:08:18.429 20:31:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:18.429 20:31:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:18.429 20:31:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.429 20:31:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:08:18.690 [2024-11-26 20:31:33.076942] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:25.273 Initializing NVMe Controllers 00:08:25.273 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:08:25.273 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:25.273 Initialization complete. Launching workers. 00:08:25.273 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 269, failed: 17524 00:08:25.273 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 17692, failed to submit 101 00:08:25.273 success 17623, unsuccessful 69, failed 0 00:08:25.273 20:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:25.273 20:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:25.273 20:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:25.273 20:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:25.597 20:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:25.597 20:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:25.597 20:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:25.597 20:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:25.597 rmmod nvme_tcp 00:08:25.597 rmmod nvme_fabrics 00:08:25.597 rmmod nvme_keyring 00:08:25.597 20:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:25.597 20:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:25.597 20:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:25.597 20:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 64724 ']' 00:08:25.597 20:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 64724 00:08:25.597 20:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 64724 ']' 00:08:25.597 20:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 64724 00:08:25.597 20:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:08:25.597 20:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.597 20:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64724 00:08:25.597 20:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:25.597 20:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:25.597 killing process with pid 64724 00:08:25.597 20:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64724' 00:08:25.597 20:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- 
# kill 64724 00:08:25.597 20:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 64724 00:08:25.597 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:25.597 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:25.597 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:25.597 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:25.597 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:25.597 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:25.597 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:25.597 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:25.597 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:25.597 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:25.597 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:25.597 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:25.597 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:25.597 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:25.597 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:25.597 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:25.597 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:25.597 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:25.597 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:25.855 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:25.855 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:25.855 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:25.855 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:25.855 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.855 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:25.855 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.855 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:08:25.855 00:08:25.855 real 0m24.766s 00:08:25.855 user 0m41.601s 00:08:25.855 sys 0m5.244s 00:08:25.855 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.855 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:25.855 
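The tail of the zcopy test swaps the malloc namespace for a delay bdev, presumably so that I/O stays outstanding long enough for the abort example to have something to cancel, which is why the delay0 creation and the abort run appear above. A hand-run equivalent might look like the sketch below; the flag meanings are assumptions based on rpc.py's bdev_delay_create options and the usual perf-style options of the abort example, not an excerpt from the test:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_delay_create -b malloc0 -d delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000                    # average/p99 read and write latencies in microseconds (1 s each)
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1   # expose the slowed bdev as NSID 1
    /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
         -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'  # 5 s run, queue depth 64, 50% reads, aborting queued I/O

The abort counters reported above are self-consistent: of the 17692 aborts submitted, 17623 succeeded and 69 did not (17623 + 69 = 17692), with a further 101 aborts that could not be submitted.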
************************************ 00:08:25.855 END TEST nvmf_zcopy 00:08:25.855 ************************************ 00:08:25.855 20:31:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:25.855 20:31:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:25.856 20:31:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.856 20:31:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:25.856 ************************************ 00:08:25.856 START TEST nvmf_nmic 00:08:25.856 ************************************ 00:08:25.856 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:25.856 * Looking for test storage... 00:08:25.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:25.856 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:25.856 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:25.856 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:26.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.115 --rc genhtml_branch_coverage=1 00:08:26.115 --rc genhtml_function_coverage=1 00:08:26.115 --rc genhtml_legend=1 00:08:26.115 --rc geninfo_all_blocks=1 00:08:26.115 --rc geninfo_unexecuted_blocks=1 00:08:26.115 00:08:26.115 ' 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:26.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.115 --rc genhtml_branch_coverage=1 00:08:26.115 --rc genhtml_function_coverage=1 00:08:26.115 --rc genhtml_legend=1 00:08:26.115 --rc geninfo_all_blocks=1 00:08:26.115 --rc geninfo_unexecuted_blocks=1 00:08:26.115 00:08:26.115 ' 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:26.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.115 --rc genhtml_branch_coverage=1 00:08:26.115 --rc genhtml_function_coverage=1 00:08:26.115 --rc genhtml_legend=1 00:08:26.115 --rc geninfo_all_blocks=1 00:08:26.115 --rc geninfo_unexecuted_blocks=1 00:08:26.115 00:08:26.115 ' 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:26.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.115 --rc genhtml_branch_coverage=1 00:08:26.115 --rc genhtml_function_coverage=1 00:08:26.115 --rc genhtml_legend=1 00:08:26.115 --rc geninfo_all_blocks=1 00:08:26.115 --rc geninfo_unexecuted_blocks=1 00:08:26.115 00:08:26.115 ' 00:08:26.115 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.116 20:31:40 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:26.116 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:26.116 20:31:40 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:26.116 Cannot 
find device "nvmf_init_br" 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:26.116 Cannot find device "nvmf_init_br2" 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:26.116 Cannot find device "nvmf_tgt_br" 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:26.116 Cannot find device "nvmf_tgt_br2" 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:26.116 Cannot find device "nvmf_init_br" 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:26.116 Cannot find device "nvmf_init_br2" 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:26.116 Cannot find device "nvmf_tgt_br" 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:26.116 Cannot find device "nvmf_tgt_br2" 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:26.116 Cannot find device "nvmf_br" 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:26.116 Cannot find device "nvmf_init_if" 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:26.116 Cannot find device "nvmf_init_if2" 00:08:26.116 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:26.117 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:26.117 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:26.117 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:26.375 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:26.375 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:08:26.375 00:08:26.375 --- 10.0.0.3 ping statistics --- 00:08:26.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.375 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:26.375 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:26.375 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.024 ms 00:08:26.375 00:08:26.375 --- 10.0.0.4 ping statistics --- 00:08:26.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.375 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:26.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:26.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:08:26.375 00:08:26.375 --- 10.0.0.1 ping statistics --- 00:08:26.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.375 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:26.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:26.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.032 ms 00:08:26.375 00:08:26.375 --- 10.0.0.2 ping statistics --- 00:08:26.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.375 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65254 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65254 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 65254 ']' 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.375 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:26.375 [2024-11-26 20:31:40.764918] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
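At this point the veth/namespace test topology is fully wired and reachable in both directions, and nvmf_tgt is being launched inside the target namespace. For reference, a minimal standalone sketch of the same layout, paraphrasing the iproute2 and iptables calls traced above (interface names and addresses are the NVMF_* defaults from nvmf/common.sh; only the first initiator/target pair is shown, the *_if2 pair is set up the same way), would be:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                                # bridge joins the host-side peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP toward the initiator interface
    ping -c 1 10.0.0.3                                             # host -> namespaced target, as verified above

The earlier "Cannot find device" / "Cannot open network namespace" messages are expected: nvmf_veth_init first tears down any leftover topology, and the `-- # true` trace lines that follow each failure show those errors are deliberately tolerated on a clean machine.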
00:08:26.375 [2024-11-26 20:31:40.764963] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.375 [2024-11-26 20:31:40.900969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.636 [2024-11-26 20:31:40.934375] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.636 [2024-11-26 20:31:40.934411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:26.636 [2024-11-26 20:31:40.934416] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:26.636 [2024-11-26 20:31:40.934420] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:26.636 [2024-11-26 20:31:40.934423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:26.636 [2024-11-26 20:31:40.935048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.636 [2024-11-26 20:31:40.935164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.636 [2024-11-26 20:31:40.935103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.636 [2024-11-26 20:31:40.935168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.636 [2024-11-26 20:31:40.966382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:27.202 [2024-11-26 20:31:41.687042] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:27.202 Malloc0 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:27.202 20:31:41 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:27.202 [2024-11-26 20:31:41.741038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:27.202 test case1: single bdev can't be used in multiple subsystems 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.202 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:27.459 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.460 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:27.460 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:27.460 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.460 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:27.460 [2024-11-26 20:31:41.764947] bdev.c:8467:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:27.460 [2024-11-26 20:31:41.764976] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:27.460 [2024-11-26 20:31:41.764981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.460 request: 00:08:27.460 { 00:08:27.460 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:27.460 "namespace": { 00:08:27.460 "bdev_name": "Malloc0", 00:08:27.460 "no_auto_visible": false, 00:08:27.460 "hide_metadata": false 00:08:27.460 }, 00:08:27.460 "method": "nvmf_subsystem_add_ns", 00:08:27.460 "req_id": 1 00:08:27.460 } 00:08:27.460 Got JSON-RPC error response 00:08:27.460 response: 00:08:27.460 { 00:08:27.460 "code": -32602, 00:08:27.460 "message": "Invalid parameters" 00:08:27.460 } 00:08:27.460 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:27.460 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:27.460 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:27.460 Adding namespace failed - expected result. 00:08:27.460 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:27.460 test case2: host connect to nvmf target in multiple paths 00:08:27.460 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:27.460 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:08:27.460 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.460 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:27.460 [2024-11-26 20:31:41.777053] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:08:27.460 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.460 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid=38d6bd30-54c5-4858-a242-ab15764fb2d9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:08:27.460 20:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid=38d6bd30-54c5-4858-a242-ab15764fb2d9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:08:27.718 20:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:27.718 20:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:27.718 20:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:27.718 20:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:27.718 20:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:08:29.638 20:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:29.638 20:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:29.638 20:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:29.638 20:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:29.638 20:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:08:29.638 20:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:29.638 20:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:29.638 [global] 00:08:29.638 thread=1 00:08:29.638 invalidate=1 00:08:29.638 rw=write 00:08:29.638 time_based=1 00:08:29.638 runtime=1 00:08:29.638 ioengine=libaio 00:08:29.638 direct=1 00:08:29.638 bs=4096 00:08:29.638 iodepth=1 00:08:29.638 norandommap=0 00:08:29.638 numjobs=1 00:08:29.638 00:08:29.638 verify_dump=1 00:08:29.638 verify_backlog=512 00:08:29.638 verify_state_save=0 00:08:29.638 do_verify=1 00:08:29.638 verify=crc32c-intel 00:08:29.638 [job0] 00:08:29.638 filename=/dev/nvme0n1 00:08:29.638 Could not set queue depth (nvme0n1) 00:08:29.638 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:29.638 fio-3.35 00:08:29.638 Starting 1 thread 00:08:31.010 00:08:31.010 job0: (groupid=0, jobs=1): err= 0: pid=65346: Tue Nov 26 20:31:45 2024 00:08:31.010 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:08:31.010 slat (nsec): min=4468, max=57141, avg=6267.07, stdev=1760.87 00:08:31.010 clat (usec): min=78, max=559, avg=135.82, stdev=27.17 00:08:31.010 lat (usec): min=84, max=571, avg=142.09, stdev=27.39 00:08:31.010 clat percentiles (usec): 00:08:31.010 | 1.00th=[ 92], 5.00th=[ 103], 10.00th=[ 109], 20.00th=[ 116], 00:08:31.010 | 30.00th=[ 122], 40.00th=[ 128], 50.00th=[ 135], 60.00th=[ 141], 00:08:31.010 | 70.00th=[ 147], 80.00th=[ 155], 90.00th=[ 163], 95.00th=[ 169], 00:08:31.010 | 99.00th=[ 202], 99.50th=[ 293], 99.90th=[ 371], 99.95th=[ 424], 00:08:31.010 | 99.99th=[ 562] 00:08:31.010 write: IOPS=4293, BW=16.8MiB/s (17.6MB/s)(16.8MiB/1001msec); 0 zone resets 00:08:31.010 slat (usec): min=7, max=150, avg=10.25, stdev= 3.83 00:08:31.010 clat (usec): min=51, max=746, avg=85.31, stdev=21.33 00:08:31.010 lat (usec): min=60, max=755, avg=95.56, stdev=22.31 00:08:31.010 clat percentiles (usec): 00:08:31.010 | 1.00th=[ 56], 5.00th=[ 63], 10.00th=[ 68], 20.00th=[ 72], 00:08:31.010 | 30.00th=[ 76], 40.00th=[ 81], 50.00th=[ 85], 60.00th=[ 89], 00:08:31.010 | 70.00th=[ 93], 80.00th=[ 97], 90.00th=[ 102], 95.00th=[ 108], 00:08:31.010 | 99.00th=[ 137], 99.50th=[ 192], 99.90th=[ 235], 99.95th=[ 445], 00:08:31.010 | 99.99th=[ 750] 00:08:31.010 bw ( KiB/s): min=16384, max=16384, per=95.40%, avg=16384.00, stdev= 0.00, samples=1 00:08:31.010 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:08:31.010 lat (usec) : 100=45.77%, 250=53.76%, 500=0.43%, 750=0.04% 00:08:31.010 cpu : usr=1.50%, sys=5.60%, ctx=8394, majf=0, minf=5 00:08:31.010 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:31.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.010 issued rwts: total=4096,4298,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:31.010 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:31.010 00:08:31.010 Run status group 0 (all jobs): 00:08:31.010 READ: bw=16.0MiB/s (16.8MB/s), 16.0MiB/s-16.0MiB/s (16.8MB/s-16.8MB/s), io=16.0MiB (16.8MB), run=1001-1001msec 00:08:31.010 WRITE: bw=16.8MiB/s (17.6MB/s), 16.8MiB/s-16.8MiB/s (17.6MB/s-17.6MB/s), io=16.8MiB (17.6MB), run=1001-1001msec 00:08:31.010 00:08:31.010 Disk stats (read/write): 00:08:31.010 nvme0n1: ios=3634/3822, merge=0/0, ticks=509/342, 
in_queue=851, util=91.09% 00:08:31.010 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:31.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:31.010 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:31.010 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:31.010 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:31.010 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:31.010 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:31.010 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:31.010 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:31.010 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:31.010 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:31.011 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:31.011 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:31.268 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:31.268 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:31.268 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:31.268 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:31.268 rmmod nvme_tcp 00:08:31.268 rmmod nvme_fabrics 00:08:31.268 rmmod nvme_keyring 00:08:31.268 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:31.268 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:31.268 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:31.268 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65254 ']' 00:08:31.268 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65254 00:08:31.268 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 65254 ']' 00:08:31.268 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 65254 00:08:31.268 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:31.268 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:31.268 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65254 00:08:31.268 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:31.268 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:31.268 killing process with pid 65254 00:08:31.268 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65254' 00:08:31.268 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # 
kill 65254 00:08:31.268 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 65254 00:08:31.268 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:31.268 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:31.268 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:31.268 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:31.268 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:31.268 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:31.268 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:31.526 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:31.526 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:31.526 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:31.526 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:31.526 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:31.526 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:31.526 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:31.526 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:31.526 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:31.526 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:31.526 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:31.526 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:31.526 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:31.526 20:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:31.526 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:31.526 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:31.526 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.526 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.526 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.526 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:08:31.526 00:08:31.526 real 0m5.793s 00:08:31.526 user 0m18.743s 00:08:31.526 sys 0m1.709s 00:08:31.526 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.526 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:31.526 ************************************ 
00:08:31.526 END TEST nvmf_nmic 00:08:31.526 ************************************ 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:31.785 ************************************ 00:08:31.785 START TEST nvmf_fio_target 00:08:31.785 ************************************ 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:31.785 * Looking for test storage... 00:08:31.785 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:31.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.785 --rc genhtml_branch_coverage=1 00:08:31.785 --rc genhtml_function_coverage=1 00:08:31.785 --rc genhtml_legend=1 00:08:31.785 --rc geninfo_all_blocks=1 00:08:31.785 --rc geninfo_unexecuted_blocks=1 00:08:31.785 00:08:31.785 ' 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:31.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.785 --rc genhtml_branch_coverage=1 00:08:31.785 --rc genhtml_function_coverage=1 00:08:31.785 --rc genhtml_legend=1 00:08:31.785 --rc geninfo_all_blocks=1 00:08:31.785 --rc geninfo_unexecuted_blocks=1 00:08:31.785 00:08:31.785 ' 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:31.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.785 --rc genhtml_branch_coverage=1 00:08:31.785 --rc genhtml_function_coverage=1 00:08:31.785 --rc genhtml_legend=1 00:08:31.785 --rc geninfo_all_blocks=1 00:08:31.785 --rc geninfo_unexecuted_blocks=1 00:08:31.785 00:08:31.785 ' 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:31.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.785 --rc genhtml_branch_coverage=1 00:08:31.785 --rc genhtml_function_coverage=1 00:08:31.785 --rc genhtml_legend=1 00:08:31.785 --rc geninfo_all_blocks=1 00:08:31.785 --rc geninfo_unexecuted_blocks=1 00:08:31.785 00:08:31.785 ' 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:31.785 
20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:31.785 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:31.786 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:31.786 20:31:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:31.786 Cannot find device "nvmf_init_br" 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:31.786 Cannot find device "nvmf_init_br2" 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:31.786 Cannot find device "nvmf_tgt_br" 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:31.786 Cannot find device "nvmf_tgt_br2" 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:31.786 Cannot find device "nvmf_init_br" 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:31.786 Cannot find device "nvmf_init_br2" 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:31.786 Cannot find device "nvmf_tgt_br" 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:31.786 Cannot find device "nvmf_tgt_br2" 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:08:31.786 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:32.076 Cannot find device "nvmf_br" 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:32.076 Cannot find device "nvmf_init_if" 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:32.076 Cannot find device "nvmf_init_if2" 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:32.076 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:08:32.076 
20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:32.076 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:32.076 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:32.076 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:32.076 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:08:32.076 00:08:32.076 --- 10.0.0.3 ping statistics --- 00:08:32.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.076 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:08:32.077 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:32.077 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:32.077 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:08:32.077 00:08:32.077 --- 10.0.0.4 ping statistics --- 00:08:32.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.077 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:08:32.077 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:32.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:32.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:08:32.077 00:08:32.077 --- 10.0.0.1 ping statistics --- 00:08:32.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.077 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:08:32.077 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:32.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:32.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:08:32.077 00:08:32.077 --- 10.0.0.2 ping statistics --- 00:08:32.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.077 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:32.077 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.077 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:08:32.077 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:32.077 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.077 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:32.077 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:32.077 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.077 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:32.077 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:32.077 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:32.077 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:32.077 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:32.077 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:32.077 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=65578 00:08:32.077 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 65578 00:08:32.077 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 65578 ']' 00:08:32.077 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.077 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.077 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.077 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.077 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:32.077 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:32.077 [2024-11-26 20:31:46.615093] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:08:32.077 [2024-11-26 20:31:46.615148] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.334 [2024-11-26 20:31:46.744475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.334 [2024-11-26 20:31:46.775584] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.334 [2024-11-26 20:31:46.775625] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.334 [2024-11-26 20:31:46.775630] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.334 [2024-11-26 20:31:46.775634] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.334 [2024-11-26 20:31:46.775638] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.334 [2024-11-26 20:31:46.776259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.334 [2024-11-26 20:31:46.776653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.334 [2024-11-26 20:31:46.776929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.334 [2024-11-26 20:31:46.776930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.334 [2024-11-26 20:31:46.804939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:33.266 20:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.266 20:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:33.266 20:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:33.266 20:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:33.266 20:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:33.266 20:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.266 20:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:33.266 [2024-11-26 20:31:47.649513] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.266 20:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:33.524 20:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:33.524 20:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:33.524 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:33.524 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:33.780 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:33.780 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:34.038 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:34.038 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:34.294 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:34.294 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:34.294 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:34.551 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:34.551 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:34.807 20:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:34.807 20:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:08:34.807 20:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:35.063 20:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:35.063 20:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:35.319 20:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:35.319 20:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:35.576 20:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:35.576 [2024-11-26 20:31:50.112733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:35.834 20:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:35.834 20:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:36.091 20:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid=38d6bd30-54c5-4858-a242-ab15764fb2d9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:08:36.349 20:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:36.349 20:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:08:36.349 20:31:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:36.349 20:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:08:36.349 20:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:08:36.349 20:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:08:38.249 20:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:38.249 20:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:38.249 20:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:38.249 20:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:08:38.249 20:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:38.249 20:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:08:38.249 20:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:38.249 [global] 00:08:38.249 thread=1 00:08:38.249 invalidate=1 00:08:38.249 rw=write 00:08:38.249 time_based=1 00:08:38.249 runtime=1 00:08:38.249 ioengine=libaio 00:08:38.249 direct=1 00:08:38.249 bs=4096 00:08:38.249 iodepth=1 00:08:38.249 norandommap=0 00:08:38.249 numjobs=1 00:08:38.249 00:08:38.249 verify_dump=1 00:08:38.249 verify_backlog=512 00:08:38.249 verify_state_save=0 00:08:38.249 do_verify=1 00:08:38.249 verify=crc32c-intel 00:08:38.249 [job0] 00:08:38.249 filename=/dev/nvme0n1 00:08:38.249 [job1] 00:08:38.249 filename=/dev/nvme0n2 00:08:38.249 [job2] 00:08:38.249 filename=/dev/nvme0n3 00:08:38.249 [job3] 00:08:38.249 filename=/dev/nvme0n4 00:08:38.249 Could not set queue depth (nvme0n1) 00:08:38.249 Could not set queue depth (nvme0n2) 00:08:38.249 Could not set queue depth (nvme0n3) 00:08:38.249 Could not set queue depth (nvme0n4) 00:08:38.509 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:38.509 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:38.509 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:38.509 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:38.509 fio-3.35 00:08:38.509 Starting 4 threads 00:08:39.889 00:08:39.889 job0: (groupid=0, jobs=1): err= 0: pid=65751: Tue Nov 26 20:31:54 2024 00:08:39.889 read: IOPS=3120, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1001msec) 00:08:39.889 slat (nsec): min=5297, max=55657, avg=6730.37, stdev=3353.09 00:08:39.889 clat (usec): min=100, max=389, avg=155.31, stdev=16.67 00:08:39.889 lat (usec): min=132, max=395, avg=162.04, stdev=17.69 00:08:39.889 clat percentiles (usec): 00:08:39.889 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:08:39.889 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 155], 00:08:39.889 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 184], 00:08:39.889 | 99.00th=[ 206], 99.50th=[ 219], 99.90th=[ 347], 99.95th=[ 379], 00:08:39.889 | 99.99th=[ 392] 
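The RPC calls and the first fio pass traced between the target start-up and the job output can likewise be condensed. The sketch below uses the same transport options, bdev sizes, NQN, serial, listener address, and host UUID that appear in the trace; the $rpc shorthand and the /tmp/nvmf_write_verify.fio path are introduced here for illustration, and the job file only approximates what fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v generated.

    #!/usr/bin/env bash
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Transport plus backing bdevs: two plain malloc namespaces, a raid0 volume,
    # and a concat volume, all built from 64 MiB / 512-byte-block malloc bdevs.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512   # Malloc0
    $rpc bdev_malloc_create 64 512   # Malloc1
    $rpc bdev_malloc_create 64 512   # Malloc2
    $rpc bdev_malloc_create 64 512   # Malloc3
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_malloc_create 64 512   # Malloc4
    $rpc bdev_malloc_create 64 512   # Malloc5
    $rpc bdev_malloc_create 64 512   # Malloc6
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

    # One subsystem with four namespaces and a TCP listener on the target address.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

    # Connect from the initiator side; this is what creates /dev/nvme0n1..n4.
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 \
                 --hostid=38d6bd30-54c5-4858-a242-ab15764fb2d9 \
                 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420

    # First fio pass: 4 KiB sequential writes at queue depth 1 with crc32c-intel verify,
    # matching the job parameters dumped in the log above.
    cat > /tmp/nvmf_write_verify.fio <<'EOF'
    [global]
    ioengine=libaio
    direct=1
    thread=1
    invalidate=1
    rw=write
    bs=4096
    iodepth=1
    numjobs=1
    time_based=1
    runtime=1
    do_verify=1
    verify=crc32c-intel
    verify_dump=1
    verify_backlog=512
    verify_state_save=0

    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme0n2
    [job2]
    filename=/dev/nvme0n3
    [job3]
    filename=/dev/nvme0n4
    EOF
    fio /tmp/nvmf_write_verify.fio

The later fio passes in this run follow the same pattern and differ only in the workload knobs (rw=randwrite, iodepth=128, or a 10-second read phase), which is why their job-file dumps in the log look nearly identical.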
00:08:39.889 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:08:39.889 slat (usec): min=5, max=137, avg= 8.17, stdev= 3.71 00:08:39.889 clat (usec): min=67, max=768, avg=127.98, stdev=18.49 00:08:39.889 lat (usec): min=91, max=777, avg=136.15, stdev=18.87 00:08:39.889 clat percentiles (usec): 00:08:39.889 | 1.00th=[ 108], 5.00th=[ 112], 10.00th=[ 115], 20.00th=[ 119], 00:08:39.889 | 30.00th=[ 121], 40.00th=[ 123], 50.00th=[ 126], 60.00th=[ 128], 00:08:39.889 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 145], 95.00th=[ 153], 00:08:39.889 | 99.00th=[ 172], 99.50th=[ 182], 99.90th=[ 338], 99.95th=[ 437], 00:08:39.889 | 99.99th=[ 766] 00:08:39.889 bw ( KiB/s): min=14440, max=14440, per=20.78%, avg=14440.00, stdev= 0.00, samples=1 00:08:39.889 iops : min= 3610, max= 3610, avg=3610.00, stdev= 0.00, samples=1 00:08:39.889 lat (usec) : 100=0.19%, 250=99.61%, 500=0.18%, 1000=0.01% 00:08:39.889 cpu : usr=1.40%, sys=4.60%, ctx=6711, majf=0, minf=11 00:08:39.889 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:39.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.889 issued rwts: total=3124,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:39.889 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:39.889 job1: (groupid=0, jobs=1): err= 0: pid=65752: Tue Nov 26 20:31:54 2024 00:08:39.889 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:08:39.889 slat (nsec): min=5379, max=73920, avg=6998.55, stdev=3561.18 00:08:39.889 clat (usec): min=72, max=420, avg=97.93, stdev=16.23 00:08:39.889 lat (usec): min=78, max=426, avg=104.93, stdev=17.19 00:08:39.889 clat percentiles (usec): 00:08:39.889 | 1.00th=[ 80], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 88], 00:08:39.889 | 30.00th=[ 91], 40.00th=[ 93], 50.00th=[ 96], 60.00th=[ 98], 00:08:39.889 | 70.00th=[ 101], 80.00th=[ 105], 90.00th=[ 114], 95.00th=[ 121], 00:08:39.889 | 99.00th=[ 141], 99.50th=[ 151], 99.90th=[ 306], 99.95th=[ 400], 00:08:39.889 | 99.99th=[ 420] 00:08:39.889 write: IOPS=5273, BW=20.6MiB/s (21.6MB/s)(20.6MiB/1001msec); 0 zone resets 00:08:39.889 slat (usec): min=6, max=116, avg=12.21, stdev= 7.13 00:08:39.889 clat (usec): min=51, max=397, avg=73.40, stdev=13.99 00:08:39.889 lat (usec): min=60, max=406, avg=85.61, stdev=17.23 00:08:39.889 clat percentiles (usec): 00:08:39.889 | 1.00th=[ 57], 5.00th=[ 60], 10.00th=[ 62], 20.00th=[ 65], 00:08:39.889 | 30.00th=[ 67], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 74], 00:08:39.889 | 70.00th=[ 77], 80.00th=[ 81], 90.00th=[ 88], 95.00th=[ 96], 00:08:39.889 | 99.00th=[ 115], 99.50th=[ 123], 99.90th=[ 229], 99.95th=[ 241], 00:08:39.889 | 99.99th=[ 396] 00:08:39.889 bw ( KiB/s): min=22856, max=22856, per=32.89%, avg=22856.00, stdev= 0.00, samples=1 00:08:39.889 iops : min= 5714, max= 5714, avg=5714.00, stdev= 0.00, samples=1 00:08:39.889 lat (usec) : 100=81.89%, 250=18.01%, 500=0.10% 00:08:39.889 cpu : usr=2.30%, sys=8.20%, ctx=10399, majf=0, minf=13 00:08:39.889 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:39.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.889 issued rwts: total=5120,5279,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:39.889 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:39.890 job2: (groupid=0, jobs=1): err= 0: pid=65753: Tue Nov 
26 20:31:54 2024 00:08:39.890 read: IOPS=3121, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1001msec) 00:08:39.890 slat (nsec): min=4017, max=57611, avg=5301.08, stdev=2469.07 00:08:39.890 clat (usec): min=86, max=433, avg=156.92, stdev=18.22 00:08:39.890 lat (usec): min=94, max=438, avg=162.23, stdev=18.85 00:08:39.890 clat percentiles (usec): 00:08:39.890 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 145], 00:08:39.890 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 157], 00:08:39.890 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 186], 00:08:39.890 | 99.00th=[ 210], 99.50th=[ 221], 99.90th=[ 367], 99.95th=[ 408], 00:08:39.890 | 99.99th=[ 433] 00:08:39.890 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:08:39.890 slat (usec): min=5, max=116, avg=10.01, stdev= 3.49 00:08:39.890 clat (usec): min=65, max=726, avg=125.99, stdev=17.78 00:08:39.890 lat (usec): min=89, max=732, avg=135.99, stdev=18.16 00:08:39.890 clat percentiles (usec): 00:08:39.890 | 1.00th=[ 105], 5.00th=[ 111], 10.00th=[ 113], 20.00th=[ 117], 00:08:39.890 | 30.00th=[ 119], 40.00th=[ 122], 50.00th=[ 124], 60.00th=[ 127], 00:08:39.890 | 70.00th=[ 130], 80.00th=[ 135], 90.00th=[ 143], 95.00th=[ 151], 00:08:39.890 | 99.00th=[ 169], 99.50th=[ 182], 99.90th=[ 338], 99.95th=[ 404], 00:08:39.890 | 99.99th=[ 725] 00:08:39.890 bw ( KiB/s): min=14424, max=14424, per=20.76%, avg=14424.00, stdev= 0.00, samples=1 00:08:39.890 iops : min= 3606, max= 3606, avg=3606.00, stdev= 0.00, samples=1 00:08:39.890 lat (usec) : 100=0.19%, 250=99.60%, 500=0.19%, 750=0.01% 00:08:39.890 cpu : usr=1.20%, sys=4.80%, ctx=6717, majf=0, minf=13 00:08:39.890 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:39.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.890 issued rwts: total=3125,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:39.890 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:39.890 job3: (groupid=0, jobs=1): err= 0: pid=65754: Tue Nov 26 20:31:54 2024 00:08:39.890 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:08:39.890 slat (nsec): min=5381, max=76726, avg=6777.46, stdev=3094.99 00:08:39.890 clat (usec): min=82, max=3303, avg=106.02, stdev=49.13 00:08:39.890 lat (usec): min=88, max=3320, avg=112.80, stdev=49.49 00:08:39.890 clat percentiles (usec): 00:08:39.890 | 1.00th=[ 87], 5.00th=[ 90], 10.00th=[ 92], 20.00th=[ 96], 00:08:39.890 | 30.00th=[ 98], 40.00th=[ 101], 50.00th=[ 103], 60.00th=[ 106], 00:08:39.890 | 70.00th=[ 110], 80.00th=[ 114], 90.00th=[ 121], 95.00th=[ 130], 00:08:39.890 | 99.00th=[ 147], 99.50th=[ 157], 99.90th=[ 231], 99.95th=[ 318], 00:08:39.890 | 99.99th=[ 3294] 00:08:39.890 write: IOPS=4938, BW=19.3MiB/s (20.2MB/s)(19.3MiB/1001msec); 0 zone resets 00:08:39.890 slat (usec): min=7, max=100, avg=10.54, stdev= 3.99 00:08:39.890 clat (usec): min=57, max=4416, avg=84.75, stdev=140.05 00:08:39.890 lat (usec): min=67, max=4431, avg=95.29, stdev=140.54 00:08:39.890 clat percentiles (usec): 00:08:39.890 | 1.00th=[ 63], 5.00th=[ 67], 10.00th=[ 69], 20.00th=[ 71], 00:08:39.890 | 30.00th=[ 73], 40.00th=[ 75], 50.00th=[ 77], 60.00th=[ 79], 00:08:39.890 | 70.00th=[ 82], 80.00th=[ 86], 90.00th=[ 93], 95.00th=[ 99], 00:08:39.890 | 99.00th=[ 121], 99.50th=[ 133], 99.90th=[ 3359], 99.95th=[ 3392], 00:08:39.890 | 99.99th=[ 4424] 00:08:39.890 bw ( KiB/s): min=20480, max=20480, per=29.47%, avg=20480.00, stdev= 0.00, 
samples=1 00:08:39.890 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:08:39.890 lat (usec) : 100=67.90%, 250=31.93%, 500=0.06% 00:08:39.890 lat (msec) : 2=0.01%, 4=0.08%, 10=0.01% 00:08:39.890 cpu : usr=2.10%, sys=6.70%, ctx=9551, majf=0, minf=11 00:08:39.890 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:39.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.890 issued rwts: total=4608,4943,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:39.890 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:39.890 00:08:39.890 Run status group 0 (all jobs): 00:08:39.890 READ: bw=62.3MiB/s (65.4MB/s), 12.2MiB/s-20.0MiB/s (12.8MB/s-20.9MB/s), io=62.4MiB (65.4MB), run=1001-1001msec 00:08:39.890 WRITE: bw=67.9MiB/s (71.2MB/s), 14.0MiB/s-20.6MiB/s (14.7MB/s-21.6MB/s), io=67.9MiB (71.2MB), run=1001-1001msec 00:08:39.890 00:08:39.890 Disk stats (read/write): 00:08:39.890 nvme0n1: ios=2843/3072, merge=0/0, ticks=451/358, in_queue=809, util=89.28% 00:08:39.890 nvme0n2: ios=4620/4608, merge=0/0, ticks=473/357, in_queue=830, util=89.62% 00:08:39.890 nvme0n3: ios=2825/3072, merge=0/0, ticks=445/388, in_queue=833, util=90.33% 00:08:39.890 nvme0n4: ios=4096/4231, merge=0/0, ticks=436/360, in_queue=796, util=88.66% 00:08:39.890 20:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:39.890 [global] 00:08:39.890 thread=1 00:08:39.890 invalidate=1 00:08:39.890 rw=randwrite 00:08:39.890 time_based=1 00:08:39.890 runtime=1 00:08:39.890 ioengine=libaio 00:08:39.890 direct=1 00:08:39.890 bs=4096 00:08:39.890 iodepth=1 00:08:39.890 norandommap=0 00:08:39.890 numjobs=1 00:08:39.890 00:08:39.890 verify_dump=1 00:08:39.890 verify_backlog=512 00:08:39.890 verify_state_save=0 00:08:39.890 do_verify=1 00:08:39.890 verify=crc32c-intel 00:08:39.890 [job0] 00:08:39.890 filename=/dev/nvme0n1 00:08:39.890 [job1] 00:08:39.890 filename=/dev/nvme0n2 00:08:39.890 [job2] 00:08:39.890 filename=/dev/nvme0n3 00:08:39.890 [job3] 00:08:39.890 filename=/dev/nvme0n4 00:08:39.890 Could not set queue depth (nvme0n1) 00:08:39.890 Could not set queue depth (nvme0n2) 00:08:39.890 Could not set queue depth (nvme0n3) 00:08:39.890 Could not set queue depth (nvme0n4) 00:08:39.890 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:39.890 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:39.890 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:39.890 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:39.890 fio-3.35 00:08:39.890 Starting 4 threads 00:08:40.825 00:08:40.825 job0: (groupid=0, jobs=1): err= 0: pid=65813: Tue Nov 26 20:31:55 2024 00:08:40.825 read: IOPS=5157, BW=20.1MiB/s (21.1MB/s)(20.2MiB/1001msec) 00:08:40.825 slat (nsec): min=5102, max=38972, avg=5814.74, stdev=1383.86 00:08:40.825 clat (usec): min=74, max=390, avg=95.72, stdev=10.00 00:08:40.825 lat (usec): min=79, max=395, avg=101.53, stdev=10.13 00:08:40.825 clat percentiles (usec): 00:08:40.825 | 1.00th=[ 81], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 89], 00:08:40.825 | 30.00th=[ 91], 40.00th=[ 93], 50.00th=[ 95], 60.00th=[ 97], 00:08:40.825 | 
70.00th=[ 99], 80.00th=[ 102], 90.00th=[ 108], 95.00th=[ 113], 00:08:40.825 | 99.00th=[ 124], 99.50th=[ 131], 99.90th=[ 143], 99.95th=[ 145], 00:08:40.825 | 99.99th=[ 392] 00:08:40.825 write: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:08:40.825 slat (usec): min=6, max=120, avg=10.02, stdev= 4.47 00:08:40.825 clat (usec): min=53, max=1171, avg=72.80, stdev=18.81 00:08:40.825 lat (usec): min=63, max=1180, avg=82.82, stdev=19.66 00:08:40.825 clat percentiles (usec): 00:08:40.825 | 1.00th=[ 59], 5.00th=[ 62], 10.00th=[ 64], 20.00th=[ 66], 00:08:40.825 | 30.00th=[ 68], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:08:40.825 | 70.00th=[ 76], 80.00th=[ 79], 90.00th=[ 84], 95.00th=[ 90], 00:08:40.825 | 99.00th=[ 106], 99.50th=[ 117], 99.90th=[ 163], 99.95th=[ 260], 00:08:40.825 | 99.99th=[ 1172] 00:08:40.825 bw ( KiB/s): min=22184, max=22184, per=32.66%, avg=22184.00, stdev= 0.00, samples=1 00:08:40.825 iops : min= 5546, max= 5546, avg=5546.00, stdev= 0.00, samples=1 00:08:40.825 lat (usec) : 100=86.46%, 250=13.51%, 500=0.02%, 750=0.01% 00:08:40.825 lat (msec) : 2=0.01% 00:08:40.825 cpu : usr=1.90%, sys=7.20%, ctx=10796, majf=0, minf=17 00:08:40.825 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:40.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:40.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:40.825 issued rwts: total=5163,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:40.825 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:40.825 job1: (groupid=0, jobs=1): err= 0: pid=65814: Tue Nov 26 20:31:55 2024 00:08:40.825 read: IOPS=3373, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1001msec) 00:08:40.825 slat (nsec): min=3910, max=18945, avg=5080.66, stdev=971.03 00:08:40.825 clat (usec): min=81, max=368, avg=148.67, stdev=13.31 00:08:40.825 lat (usec): min=86, max=372, avg=153.75, stdev=13.24 00:08:40.825 clat percentiles (usec): 00:08:40.825 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:08:40.825 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 149], 00:08:40.825 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 165], 95.00th=[ 174], 00:08:40.825 | 99.00th=[ 188], 99.50th=[ 196], 99.90th=[ 212], 99.95th=[ 310], 00:08:40.825 | 99.99th=[ 367] 00:08:40.825 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:08:40.825 slat (usec): min=5, max=114, avg= 8.69, stdev= 4.38 00:08:40.825 clat (usec): min=40, max=1267, avg=123.95, stdev=24.76 00:08:40.825 lat (usec): min=80, max=1274, avg=132.64, stdev=25.21 00:08:40.825 clat percentiles (usec): 00:08:40.825 | 1.00th=[ 104], 5.00th=[ 109], 10.00th=[ 111], 20.00th=[ 114], 00:08:40.825 | 30.00th=[ 117], 40.00th=[ 119], 50.00th=[ 122], 60.00th=[ 125], 00:08:40.825 | 70.00th=[ 129], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 147], 00:08:40.825 | 99.00th=[ 165], 99.50th=[ 172], 99.90th=[ 289], 99.95th=[ 644], 00:08:40.825 | 99.99th=[ 1270] 00:08:40.825 bw ( KiB/s): min=15552, max=15552, per=22.89%, avg=15552.00, stdev= 0.00, samples=1 00:08:40.825 iops : min= 3888, max= 3888, avg=3888.00, stdev= 0.00, samples=1 00:08:40.825 lat (usec) : 50=0.01%, 100=0.29%, 250=99.61%, 500=0.06%, 750=0.01% 00:08:40.825 lat (msec) : 2=0.01% 00:08:40.825 cpu : usr=1.30%, sys=4.20%, ctx=6965, majf=0, minf=13 00:08:40.825 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:40.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:40.825 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:40.825 issued rwts: total=3377,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:40.825 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:40.825 job2: (groupid=0, jobs=1): err= 0: pid=65815: Tue Nov 26 20:31:55 2024 00:08:40.825 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:08:40.825 slat (nsec): min=4208, max=64120, avg=5530.43, stdev=3075.17 00:08:40.825 clat (usec): min=104, max=663, avg=131.46, stdev=23.26 00:08:40.825 lat (usec): min=109, max=667, avg=136.99, stdev=23.75 00:08:40.825 clat percentiles (usec): 00:08:40.825 | 1.00th=[ 109], 5.00th=[ 113], 10.00th=[ 115], 20.00th=[ 119], 00:08:40.825 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 130], 00:08:40.825 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 153], 95.00th=[ 163], 00:08:40.825 | 99.00th=[ 223], 99.50th=[ 237], 99.90th=[ 379], 99.95th=[ 449], 00:08:40.825 | 99.99th=[ 660] 00:08:40.825 write: IOPS=4195, BW=16.4MiB/s (17.2MB/s)(16.4MiB/1001msec); 0 zone resets 00:08:40.825 slat (usec): min=6, max=104, avg= 8.53, stdev= 3.48 00:08:40.825 clat (usec): min=72, max=1261, avg=94.52, stdev=26.08 00:08:40.825 lat (usec): min=79, max=1269, avg=103.05, stdev=26.47 00:08:40.825 clat percentiles (usec): 00:08:40.825 | 1.00th=[ 79], 5.00th=[ 82], 10.00th=[ 83], 20.00th=[ 85], 00:08:40.825 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 92], 60.00th=[ 94], 00:08:40.825 | 70.00th=[ 97], 80.00th=[ 101], 90.00th=[ 106], 95.00th=[ 114], 00:08:40.825 | 99.00th=[ 135], 99.50th=[ 161], 99.90th=[ 367], 99.95th=[ 445], 00:08:40.825 | 99.99th=[ 1254] 00:08:40.825 bw ( KiB/s): min=17288, max=17288, per=25.45%, avg=17288.00, stdev= 0.00, samples=1 00:08:40.825 iops : min= 4322, max= 4322, avg=4322.00, stdev= 0.00, samples=1 00:08:40.825 lat (usec) : 100=39.56%, 250=60.14%, 500=0.27%, 750=0.02% 00:08:40.825 lat (msec) : 2=0.01% 00:08:40.825 cpu : usr=1.00%, sys=5.10%, ctx=8296, majf=0, minf=5 00:08:40.825 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:40.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:40.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:40.825 issued rwts: total=4096,4200,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:40.825 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:40.825 job3: (groupid=0, jobs=1): err= 0: pid=65816: Tue Nov 26 20:31:55 2024 00:08:40.825 read: IOPS=3372, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1001msec) 00:08:40.825 slat (nsec): min=3979, max=23931, avg=5327.13, stdev=1024.21 00:08:40.825 clat (usec): min=92, max=395, avg=148.38, stdev=13.18 00:08:40.825 lat (usec): min=101, max=401, avg=153.71, stdev=13.20 00:08:40.825 clat percentiles (usec): 00:08:40.825 | 1.00th=[ 129], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:08:40.825 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 149], 00:08:40.825 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 165], 95.00th=[ 172], 00:08:40.825 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 206], 99.95th=[ 326], 00:08:40.825 | 99.99th=[ 396] 00:08:40.825 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:08:40.825 slat (nsec): min=5498, max=95335, avg=9758.98, stdev=4148.26 00:08:40.825 clat (usec): min=75, max=1309, avg=122.93, stdev=24.60 00:08:40.825 lat (usec): min=89, max=1318, avg=132.69, stdev=25.29 00:08:40.825 clat percentiles (usec): 00:08:40.825 | 1.00th=[ 103], 5.00th=[ 108], 10.00th=[ 110], 20.00th=[ 113], 00:08:40.825 | 30.00th=[ 
116], 40.00th=[ 118], 50.00th=[ 121], 60.00th=[ 124], 00:08:40.825 | 70.00th=[ 127], 80.00th=[ 131], 90.00th=[ 139], 95.00th=[ 145], 00:08:40.825 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 281], 99.95th=[ 545], 00:08:40.825 | 99.99th=[ 1303] 00:08:40.825 bw ( KiB/s): min=15528, max=15528, per=22.86%, avg=15528.00, stdev= 0.00, samples=1 00:08:40.825 iops : min= 3882, max= 3882, avg=3882.00, stdev= 0.00, samples=1 00:08:40.825 lat (usec) : 100=0.26%, 250=99.66%, 500=0.06%, 750=0.01% 00:08:40.825 lat (msec) : 2=0.01% 00:08:40.825 cpu : usr=1.70%, sys=4.50%, ctx=6963, majf=0, minf=15 00:08:40.825 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:40.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:40.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:40.825 issued rwts: total=3376,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:40.825 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:40.825 00:08:40.825 Run status group 0 (all jobs): 00:08:40.825 READ: bw=62.5MiB/s (65.5MB/s), 13.2MiB/s-20.1MiB/s (13.8MB/s-21.1MB/s), io=62.5MiB (65.6MB), run=1001-1001msec 00:08:40.825 WRITE: bw=66.3MiB/s (69.6MB/s), 14.0MiB/s-22.0MiB/s (14.7MB/s-23.0MB/s), io=66.4MiB (69.6MB), run=1001-1001msec 00:08:40.825 00:08:40.825 Disk stats (read/write): 00:08:40.825 nvme0n1: ios=4658/4876, merge=0/0, ticks=460/371, in_queue=831, util=89.28% 00:08:40.826 nvme0n2: ios=3083/3072, merge=0/0, ticks=432/360, in_queue=792, util=89.24% 00:08:40.826 nvme0n3: ios=3601/3719, merge=0/0, ticks=487/356, in_queue=843, util=89.83% 00:08:40.826 nvme0n4: ios=3039/3072, merge=0/0, ticks=428/368, in_queue=796, util=90.00% 00:08:40.826 20:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:08:40.826 [global] 00:08:40.826 thread=1 00:08:40.826 invalidate=1 00:08:40.826 rw=write 00:08:40.826 time_based=1 00:08:40.826 runtime=1 00:08:40.826 ioengine=libaio 00:08:40.826 direct=1 00:08:40.826 bs=4096 00:08:40.826 iodepth=128 00:08:40.826 norandommap=0 00:08:40.826 numjobs=1 00:08:40.826 00:08:40.826 verify_dump=1 00:08:40.826 verify_backlog=512 00:08:40.826 verify_state_save=0 00:08:40.826 do_verify=1 00:08:40.826 verify=crc32c-intel 00:08:40.826 [job0] 00:08:40.826 filename=/dev/nvme0n1 00:08:40.826 [job1] 00:08:40.826 filename=/dev/nvme0n2 00:08:40.826 [job2] 00:08:40.826 filename=/dev/nvme0n3 00:08:40.826 [job3] 00:08:40.826 filename=/dev/nvme0n4 00:08:41.084 Could not set queue depth (nvme0n1) 00:08:41.084 Could not set queue depth (nvme0n2) 00:08:41.084 Could not set queue depth (nvme0n3) 00:08:41.084 Could not set queue depth (nvme0n4) 00:08:41.084 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:41.084 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:41.084 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:41.084 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:41.084 fio-3.35 00:08:41.084 Starting 4 threads 00:08:42.459 00:08:42.459 job0: (groupid=0, jobs=1): err= 0: pid=65870: Tue Nov 26 20:31:56 2024 00:08:42.459 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:08:42.459 slat (usec): min=3, max=6997, avg=134.59, stdev=592.48 00:08:42.459 clat (usec): 
min=8561, max=33094, avg=16776.37, stdev=4151.70 00:08:42.459 lat (usec): min=8573, max=33937, avg=16910.95, stdev=4201.97 00:08:42.459 clat percentiles (usec): 00:08:42.459 | 1.00th=[ 9372], 5.00th=[12125], 10.00th=[13173], 20.00th=[13566], 00:08:42.459 | 30.00th=[13698], 40.00th=[14484], 50.00th=[15664], 60.00th=[16909], 00:08:42.459 | 70.00th=[18482], 80.00th=[19792], 90.00th=[22938], 95.00th=[25297], 00:08:42.459 | 99.00th=[30016], 99.50th=[30802], 99.90th=[33162], 99.95th=[33162], 00:08:42.459 | 99.99th=[33162] 00:08:42.459 write: IOPS=3489, BW=13.6MiB/s (14.3MB/s)(13.7MiB/1005msec); 0 zone resets 00:08:42.459 slat (usec): min=6, max=3918, avg=162.19, stdev=576.21 00:08:42.459 clat (usec): min=4298, max=44101, avg=21493.33, stdev=9428.68 00:08:42.459 lat (usec): min=4315, max=44118, avg=21655.51, stdev=9490.67 00:08:42.459 clat percentiles (usec): 00:08:42.459 | 1.00th=[ 7898], 5.00th=[ 9765], 10.00th=[11076], 20.00th=[12780], 00:08:42.459 | 30.00th=[13173], 40.00th=[14615], 50.00th=[21103], 60.00th=[23725], 00:08:42.459 | 70.00th=[28705], 80.00th=[30802], 90.00th=[34866], 95.00th=[36963], 00:08:42.459 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:08:42.459 | 99.99th=[44303] 00:08:42.459 bw ( KiB/s): min=11984, max=15056, per=19.09%, avg=13520.00, stdev=2172.23, samples=2 00:08:42.459 iops : min= 2996, max= 3764, avg=3380.00, stdev=543.06, samples=2 00:08:42.459 lat (msec) : 10=4.59%, 20=60.18%, 50=35.23% 00:08:42.459 cpu : usr=1.69%, sys=6.77%, ctx=423, majf=0, minf=9 00:08:42.459 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:08:42.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:42.459 issued rwts: total=3072,3507,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:42.459 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:42.459 job1: (groupid=0, jobs=1): err= 0: pid=65871: Tue Nov 26 20:31:56 2024 00:08:42.459 read: IOPS=4431, BW=17.3MiB/s (18.1MB/s)(17.4MiB/1004msec) 00:08:42.459 slat (usec): min=2, max=11312, avg=120.17, stdev=666.45 00:08:42.459 clat (usec): min=1188, max=32066, avg=15554.64, stdev=5103.57 00:08:42.459 lat (usec): min=3670, max=32104, avg=15674.81, stdev=5096.77 00:08:42.459 clat percentiles (usec): 00:08:42.459 | 1.00th=[ 6783], 5.00th=[10421], 10.00th=[11338], 20.00th=[11731], 00:08:42.459 | 30.00th=[11863], 40.00th=[12125], 50.00th=[14484], 60.00th=[15533], 00:08:42.459 | 70.00th=[17957], 80.00th=[19530], 90.00th=[21890], 95.00th=[27657], 00:08:42.459 | 99.00th=[30802], 99.50th=[31851], 99.90th=[32113], 99.95th=[32113], 00:08:42.459 | 99.99th=[32113] 00:08:42.459 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:08:42.459 slat (usec): min=6, max=6421, avg=96.91, stdev=485.41 00:08:42.459 clat (usec): min=7206, max=18370, avg=12461.33, stdev=2711.72 00:08:42.459 lat (usec): min=8882, max=20133, avg=12558.24, stdev=2689.35 00:08:42.459 clat percentiles (usec): 00:08:42.459 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9634], 00:08:42.459 | 30.00th=[10552], 40.00th=[11469], 50.00th=[12256], 60.00th=[12649], 00:08:42.459 | 70.00th=[13173], 80.00th=[15270], 90.00th=[17171], 95.00th=[17433], 00:08:42.459 | 99.00th=[17957], 99.50th=[18220], 99.90th=[18220], 99.95th=[18482], 00:08:42.459 | 99.99th=[18482] 00:08:42.459 bw ( KiB/s): min=16632, max=20272, per=26.05%, avg=18452.00, stdev=2573.87, samples=2 00:08:42.459 iops : min= 4158, max= 5068, 
avg=4613.00, stdev=643.47, samples=2 00:08:42.459 lat (msec) : 2=0.01%, 4=0.31%, 10=15.02%, 20=76.53%, 50=8.14% 00:08:42.459 cpu : usr=2.09%, sys=7.68%, ctx=302, majf=0, minf=13 00:08:42.459 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:08:42.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:42.459 issued rwts: total=4449,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:42.459 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:42.459 job2: (groupid=0, jobs=1): err= 0: pid=65872: Tue Nov 26 20:31:56 2024 00:08:42.459 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:08:42.459 slat (usec): min=2, max=7987, avg=106.95, stdev=617.16 00:08:42.459 clat (usec): min=7057, max=32713, avg=13916.02, stdev=5275.83 00:08:42.459 lat (usec): min=8531, max=32722, avg=14022.97, stdev=5282.95 00:08:42.459 clat percentiles (usec): 00:08:42.459 | 1.00th=[ 8225], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[10159], 00:08:42.459 | 30.00th=[10421], 40.00th=[10552], 50.00th=[12125], 60.00th=[13960], 00:08:42.459 | 70.00th=[15139], 80.00th=[16712], 90.00th=[19530], 95.00th=[26608], 00:08:42.459 | 99.00th=[32375], 99.50th=[32637], 99.90th=[32637], 99.95th=[32637], 00:08:42.459 | 99.99th=[32637] 00:08:42.459 write: IOPS=5435, BW=21.2MiB/s (22.3MB/s)(21.3MiB/1001msec); 0 zone resets 00:08:42.459 slat (usec): min=8, max=7857, avg=78.87, stdev=401.33 00:08:42.459 clat (usec): min=571, max=25777, avg=10090.10, stdev=3055.62 00:08:42.459 lat (usec): min=2259, max=25790, avg=10168.97, stdev=3049.20 00:08:42.459 clat percentiles (usec): 00:08:42.459 | 1.00th=[ 4621], 5.00th=[ 8029], 10.00th=[ 8094], 20.00th=[ 8225], 00:08:42.459 | 30.00th=[ 8291], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9896], 00:08:42.459 | 70.00th=[10945], 80.00th=[11469], 90.00th=[13435], 95.00th=[17171], 00:08:42.459 | 99.00th=[20317], 99.50th=[25822], 99.90th=[25822], 99.95th=[25822], 00:08:42.459 | 99.99th=[25822] 00:08:42.459 bw ( KiB/s): min=18733, max=23808, per=30.03%, avg=21270.50, stdev=3588.57, samples=2 00:08:42.459 iops : min= 4683, max= 5952, avg=5317.50, stdev=897.32, samples=2 00:08:42.459 lat (usec) : 750=0.01% 00:08:42.459 lat (msec) : 4=0.30%, 10=35.89%, 20=58.13%, 50=5.67% 00:08:42.459 cpu : usr=3.50%, sys=7.50%, ctx=332, majf=0, minf=17 00:08:42.459 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:42.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:42.459 issued rwts: total=5120,5441,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:42.459 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:42.459 job3: (groupid=0, jobs=1): err= 0: pid=65873: Tue Nov 26 20:31:56 2024 00:08:42.459 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:08:42.459 slat (usec): min=3, max=5955, avg=112.52, stdev=562.20 00:08:42.459 clat (usec): min=7834, max=27057, avg=13877.26, stdev=2734.76 00:08:42.459 lat (usec): min=7846, max=27073, avg=13989.78, stdev=2779.57 00:08:42.459 clat percentiles (usec): 00:08:42.459 | 1.00th=[ 9634], 5.00th=[10421], 10.00th=[11469], 20.00th=[11863], 00:08:42.459 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12518], 60.00th=[13698], 00:08:42.459 | 70.00th=[15664], 80.00th=[16712], 90.00th=[17171], 95.00th=[17695], 00:08:42.459 | 99.00th=[21627], 99.50th=[23200], 99.90th=[27132], 99.95th=[27132], 
00:08:42.459 | 99.99th=[27132] 00:08:42.459 write: IOPS=4222, BW=16.5MiB/s (17.3MB/s)(16.6MiB/1004msec); 0 zone resets 00:08:42.459 slat (usec): min=5, max=4416, avg=122.52, stdev=469.11 00:08:42.459 clat (usec): min=3268, max=39136, avg=16562.95, stdev=8288.79 00:08:42.459 lat (usec): min=5240, max=39150, avg=16685.47, stdev=8341.43 00:08:42.459 clat percentiles (usec): 00:08:42.459 | 1.00th=[ 7242], 5.00th=[ 7767], 10.00th=[ 7963], 20.00th=[ 8291], 00:08:42.459 | 30.00th=[ 8717], 40.00th=[10028], 50.00th=[14222], 60.00th=[20317], 00:08:42.459 | 70.00th=[21365], 80.00th=[24773], 90.00th=[28967], 95.00th=[31065], 00:08:42.459 | 99.00th=[33162], 99.50th=[35390], 99.90th=[39060], 99.95th=[39060], 00:08:42.459 | 99.99th=[39060] 00:08:42.459 bw ( KiB/s): min=12408, max=20521, per=23.25%, avg=16464.50, stdev=5736.76, samples=2 00:08:42.459 iops : min= 3102, max= 5130, avg=4116.00, stdev=1434.01, samples=2 00:08:42.460 lat (msec) : 4=0.01%, 10=21.64%, 20=55.52%, 50=22.82% 00:08:42.460 cpu : usr=2.49%, sys=6.88%, ctx=446, majf=0, minf=7 00:08:42.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:08:42.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:42.460 issued rwts: total=4096,4239,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:42.460 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:42.460 00:08:42.460 Run status group 0 (all jobs): 00:08:42.460 READ: bw=65.1MiB/s (68.2MB/s), 11.9MiB/s-20.0MiB/s (12.5MB/s-20.9MB/s), io=65.4MiB (68.6MB), run=1001-1005msec 00:08:42.460 WRITE: bw=69.2MiB/s (72.5MB/s), 13.6MiB/s-21.2MiB/s (14.3MB/s-22.3MB/s), io=69.5MiB (72.9MB), run=1001-1005msec 00:08:42.460 00:08:42.460 Disk stats (read/write): 00:08:42.460 nvme0n1: ios=2759/3072, merge=0/0, ticks=15513/20093, in_queue=35606, util=88.88% 00:08:42.460 nvme0n2: ios=3665/4096, merge=0/0, ticks=14081/11888, in_queue=25969, util=88.73% 00:08:42.460 nvme0n3: ios=4475/4608, merge=0/0, ticks=15147/10799, in_queue=25946, util=89.43% 00:08:42.460 nvme0n4: ios=3584/3831, merge=0/0, ticks=24461/28043, in_queue=52504, util=89.85% 00:08:42.460 20:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:08:42.460 [global] 00:08:42.460 thread=1 00:08:42.460 invalidate=1 00:08:42.460 rw=randwrite 00:08:42.460 time_based=1 00:08:42.460 runtime=1 00:08:42.460 ioengine=libaio 00:08:42.460 direct=1 00:08:42.460 bs=4096 00:08:42.460 iodepth=128 00:08:42.460 norandommap=0 00:08:42.460 numjobs=1 00:08:42.460 00:08:42.460 verify_dump=1 00:08:42.460 verify_backlog=512 00:08:42.460 verify_state_save=0 00:08:42.460 do_verify=1 00:08:42.460 verify=crc32c-intel 00:08:42.460 [job0] 00:08:42.460 filename=/dev/nvme0n1 00:08:42.460 [job1] 00:08:42.460 filename=/dev/nvme0n2 00:08:42.460 [job2] 00:08:42.460 filename=/dev/nvme0n3 00:08:42.460 [job3] 00:08:42.460 filename=/dev/nvme0n4 00:08:42.460 Could not set queue depth (nvme0n1) 00:08:42.460 Could not set queue depth (nvme0n2) 00:08:42.460 Could not set queue depth (nvme0n3) 00:08:42.460 Could not set queue depth (nvme0n4) 00:08:42.460 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:42.460 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:42.460 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:42.460 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:42.460 fio-3.35 00:08:42.460 Starting 4 threads 00:08:43.464 00:08:43.464 job0: (groupid=0, jobs=1): err= 0: pid=65928: Tue Nov 26 20:31:58 2024 00:08:43.464 read: IOPS=6030, BW=23.6MiB/s (24.7MB/s)(23.7MiB/1005msec) 00:08:43.464 slat (usec): min=5, max=5612, avg=79.78, stdev=519.89 00:08:43.464 clat (usec): min=4489, max=17287, avg=10974.49, stdev=1301.23 00:08:43.464 lat (usec): min=4497, max=21073, avg=11054.27, stdev=1305.52 00:08:43.464 clat percentiles (usec): 00:08:43.464 | 1.00th=[ 6521], 5.00th=[ 9634], 10.00th=[10421], 20.00th=[10683], 00:08:43.464 | 30.00th=[10814], 40.00th=[10945], 50.00th=[10945], 60.00th=[11207], 00:08:43.464 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11731], 95.00th=[11994], 00:08:43.464 | 99.00th=[16450], 99.50th=[16450], 99.90th=[17171], 99.95th=[17171], 00:08:43.464 | 99.99th=[17171] 00:08:43.464 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:08:43.464 slat (usec): min=3, max=8072, avg=79.78, stdev=517.05 00:08:43.464 clat (usec): min=4842, max=14427, avg=9906.69, stdev=1093.89 00:08:43.464 lat (usec): min=5687, max=14441, avg=9986.48, stdev=996.70 00:08:43.464 clat percentiles (usec): 00:08:43.464 | 1.00th=[ 5866], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9241], 00:08:43.464 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10159], 00:08:43.464 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10683], 95.00th=[11076], 00:08:43.464 | 99.00th=[14091], 99.50th=[14091], 99.90th=[14353], 99.95th=[14353], 00:08:43.464 | 99.99th=[14484] 00:08:43.464 bw ( KiB/s): min=24576, max=24625, per=22.88%, avg=24600.50, stdev=34.65, samples=2 00:08:43.464 iops : min= 6144, max= 6156, avg=6150.00, stdev= 8.49, samples=2 00:08:43.464 lat (msec) : 10=27.04%, 20=72.96% 00:08:43.464 cpu : usr=2.49%, sys=9.66%, ctx=288, majf=0, minf=13 00:08:43.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:08:43.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:43.464 issued rwts: total=6061,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.464 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:43.464 job1: (groupid=0, jobs=1): err= 0: pid=65929: Tue Nov 26 20:31:58 2024 00:08:43.464 read: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec) 00:08:43.464 slat (usec): min=3, max=4957, avg=65.14, stdev=423.05 00:08:43.464 clat (usec): min=5294, max=14489, avg=9072.21, stdev=991.73 00:08:43.464 lat (usec): min=5302, max=17337, avg=9137.35, stdev=1013.53 00:08:43.464 clat percentiles (usec): 00:08:43.464 | 1.00th=[ 5735], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 8717], 00:08:43.464 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 8979], 60.00th=[ 9110], 00:08:43.464 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9765], 95.00th=[10028], 00:08:43.464 | 99.00th=[13698], 99.50th=[14091], 99.90th=[14484], 99.95th=[14484], 00:08:43.464 | 99.99th=[14484] 00:08:43.464 write: IOPS=7584, BW=29.6MiB/s (31.1MB/s)(29.7MiB/1004msec); 0 zone resets 00:08:43.464 slat (usec): min=7, max=6000, avg=66.18, stdev=405.37 00:08:43.464 clat (usec): min=481, max=11768, avg=8167.15, stdev=861.23 00:08:43.464 lat (usec): min=4257, max=11790, avg=8233.32, stdev=781.72 00:08:43.464 clat percentiles (usec): 00:08:43.464 | 1.00th=[ 4817], 
5.00th=[ 7177], 10.00th=[ 7439], 20.00th=[ 7701], 00:08:43.464 | 30.00th=[ 7963], 40.00th=[ 8094], 50.00th=[ 8225], 60.00th=[ 8356], 00:08:43.464 | 70.00th=[ 8455], 80.00th=[ 8586], 90.00th=[ 8848], 95.00th=[ 9241], 00:08:43.464 | 99.00th=[11207], 99.50th=[11600], 99.90th=[11731], 99.95th=[11731], 00:08:43.464 | 99.99th=[11731] 00:08:43.464 bw ( KiB/s): min=29298, max=30656, per=27.88%, avg=29977.00, stdev=960.25, samples=2 00:08:43.464 iops : min= 7324, max= 7664, avg=7494.00, stdev=240.42, samples=2 00:08:43.464 lat (usec) : 500=0.01% 00:08:43.464 lat (msec) : 10=96.20%, 20=3.79% 00:08:43.464 cpu : usr=3.99%, sys=10.87%, ctx=317, majf=0, minf=12 00:08:43.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:08:43.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:43.464 issued rwts: total=7168,7615,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.464 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:43.464 job2: (groupid=0, jobs=1): err= 0: pid=65930: Tue Nov 26 20:31:58 2024 00:08:43.464 read: IOPS=6294, BW=24.6MiB/s (25.8MB/s)(24.8MiB/1007msec) 00:08:43.464 slat (usec): min=5, max=5693, avg=74.80, stdev=496.48 00:08:43.464 clat (usec): min=1653, max=16738, avg=10302.52, stdev=1249.53 00:08:43.464 lat (usec): min=5476, max=20264, avg=10377.32, stdev=1265.81 00:08:43.464 clat percentiles (usec): 00:08:43.464 | 1.00th=[ 6325], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[ 9765], 00:08:43.464 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:08:43.464 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11207], 95.00th=[11469], 00:08:43.465 | 99.00th=[15270], 99.50th=[15926], 99.90th=[16712], 99.95th=[16712], 00:08:43.465 | 99.99th=[16712] 00:08:43.465 write: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec); 0 zone resets 00:08:43.465 slat (usec): min=6, max=7413, avg=75.32, stdev=473.84 00:08:43.465 clat (usec): min=4544, max=13731, avg=9361.43, stdev=968.06 00:08:43.465 lat (usec): min=6342, max=13747, avg=9436.75, stdev=871.84 00:08:43.465 clat percentiles (usec): 00:08:43.465 | 1.00th=[ 5866], 5.00th=[ 8225], 10.00th=[ 8455], 20.00th=[ 8848], 00:08:43.465 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9503], 00:08:43.465 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10159], 95.00th=[10683], 00:08:43.465 | 99.00th=[13566], 99.50th=[13566], 99.90th=[13698], 99.95th=[13698], 00:08:43.465 | 99.99th=[13698] 00:08:43.465 bw ( KiB/s): min=26616, max=26632, per=24.76%, avg=26624.00, stdev=11.31, samples=2 00:08:43.465 iops : min= 6654, max= 6658, avg=6656.00, stdev= 2.83, samples=2 00:08:43.465 lat (msec) : 2=0.01%, 10=58.91%, 20=41.09% 00:08:43.465 cpu : usr=2.58%, sys=10.93%, ctx=275, majf=0, minf=13 00:08:43.465 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:08:43.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:43.465 issued rwts: total=6339,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.465 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:43.465 job3: (groupid=0, jobs=1): err= 0: pid=65931: Tue Nov 26 20:31:58 2024 00:08:43.465 read: IOPS=6557, BW=25.6MiB/s (26.9MB/s)(25.8MiB/1007msec) 00:08:43.465 slat (usec): min=5, max=5791, avg=73.02, stdev=480.99 00:08:43.465 clat (usec): min=1623, max=16351, avg=10037.49, stdev=1166.27 00:08:43.465 lat 
(usec): min=5813, max=19327, avg=10110.51, stdev=1177.50 00:08:43.465 clat percentiles (usec): 00:08:43.465 | 1.00th=[ 6128], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[ 9634], 00:08:43.465 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10159], 00:08:43.465 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10814], 95.00th=[11338], 00:08:43.465 | 99.00th=[14877], 99.50th=[15664], 99.90th=[16319], 99.95th=[16319], 00:08:43.465 | 99.99th=[16319] 00:08:43.465 write: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec); 0 zone resets 00:08:43.465 slat (usec): min=5, max=7949, avg=74.15, stdev=466.41 00:08:43.465 clat (usec): min=4552, max=14134, avg=9229.68, stdev=1011.85 00:08:43.465 lat (usec): min=6113, max=14148, avg=9303.82, stdev=925.53 00:08:43.465 clat percentiles (usec): 00:08:43.465 | 1.00th=[ 5932], 5.00th=[ 7898], 10.00th=[ 8225], 20.00th=[ 8586], 00:08:43.465 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9372], 00:08:43.465 | 70.00th=[ 9765], 80.00th=[ 9896], 90.00th=[10028], 95.00th=[10421], 00:08:43.465 | 99.00th=[13304], 99.50th=[14091], 99.90th=[14091], 99.95th=[14091], 00:08:43.465 | 99.99th=[14091] 00:08:43.465 bw ( KiB/s): min=26120, max=27182, per=24.78%, avg=26651.00, stdev=750.95, samples=2 00:08:43.465 iops : min= 6530, max= 6795, avg=6662.50, stdev=187.38, samples=2 00:08:43.465 lat (msec) : 2=0.01%, 10=68.78%, 20=31.21% 00:08:43.465 cpu : usr=3.98%, sys=9.74%, ctx=286, majf=0, minf=15 00:08:43.465 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:08:43.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:43.465 issued rwts: total=6603,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.465 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:43.465 00:08:43.465 Run status group 0 (all jobs): 00:08:43.465 READ: bw=102MiB/s (106MB/s), 23.6MiB/s-27.9MiB/s (24.7MB/s-29.2MB/s), io=102MiB (107MB), run=1004-1007msec 00:08:43.465 WRITE: bw=105MiB/s (110MB/s), 23.9MiB/s-29.6MiB/s (25.0MB/s-31.1MB/s), io=106MiB (111MB), run=1004-1007msec 00:08:43.465 00:08:43.465 Disk stats (read/write): 00:08:43.465 nvme0n1: ios=5170/5566, merge=0/0, ticks=54308/52337, in_queue=106645, util=89.88% 00:08:43.465 nvme0n2: ios=6319/6656, merge=0/0, ticks=54423/51001, in_queue=105424, util=90.45% 00:08:43.465 nvme0n3: ios=5651/5704, merge=0/0, ticks=54913/50242, in_queue=105155, util=90.37% 00:08:43.465 nvme0n4: ios=5649/5952, merge=0/0, ticks=53965/51825, in_queue=105790, util=90.24% 00:08:43.465 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:08:43.725 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=65949 00:08:43.725 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:08:43.725 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:08:43.725 [global] 00:08:43.725 thread=1 00:08:43.725 invalidate=1 00:08:43.725 rw=read 00:08:43.725 time_based=1 00:08:43.725 runtime=10 00:08:43.725 ioengine=libaio 00:08:43.725 direct=1 00:08:43.725 bs=4096 00:08:43.725 iodepth=1 00:08:43.725 norandommap=1 00:08:43.725 numjobs=1 00:08:43.725 00:08:43.725 [job0] 00:08:43.725 filename=/dev/nvme0n1 00:08:43.725 [job1] 00:08:43.725 filename=/dev/nvme0n2 00:08:43.725 [job2] 00:08:43.725 filename=/dev/nvme0n3 00:08:43.725 [job3] 
00:08:43.725 filename=/dev/nvme0n4 00:08:43.725 Could not set queue depth (nvme0n1) 00:08:43.725 Could not set queue depth (nvme0n2) 00:08:43.725 Could not set queue depth (nvme0n3) 00:08:43.725 Could not set queue depth (nvme0n4) 00:08:43.725 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:43.725 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:43.725 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:43.725 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:43.725 fio-3.35 00:08:43.725 Starting 4 threads 00:08:47.022 20:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:08:47.022 fio: pid=65993, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:47.022 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=73232384, buflen=4096 00:08:47.022 20:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:08:47.022 fio: pid=65992, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:47.022 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=100040704, buflen=4096 00:08:47.022 20:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:47.022 20:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:08:47.282 fio: pid=65989, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:47.282 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=63164416, buflen=4096 00:08:47.282 20:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:47.282 20:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:08:47.539 fio: pid=65990, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:47.539 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=27746304, buflen=4096 00:08:47.539 20:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:47.539 20:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:08:47.539 00:08:47.539 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=65989: Tue Nov 26 20:32:01 2024 00:08:47.539 read: IOPS=9580, BW=37.4MiB/s (39.2MB/s)(124MiB/3320msec) 00:08:47.539 slat (usec): min=4, max=14212, avg= 7.62, stdev=135.97 00:08:47.539 clat (usec): min=35, max=2131, avg=96.17, stdev=20.45 00:08:47.539 lat (usec): min=76, max=14332, avg=103.78, stdev=137.81 00:08:47.539 clat percentiles (usec): 00:08:47.539 | 1.00th=[ 79], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 88], 00:08:47.539 | 30.00th=[ 90], 40.00th=[ 92], 50.00th=[ 94], 60.00th=[ 97], 00:08:47.539 | 70.00th=[ 99], 80.00th=[ 103], 90.00th=[ 
110], 95.00th=[ 117], 00:08:47.539 | 99.00th=[ 135], 99.50th=[ 143], 99.90th=[ 223], 99.95th=[ 338], 00:08:47.539 | 99.99th=[ 1057] 00:08:47.539 bw ( KiB/s): min=36504, max=39536, per=35.45%, avg=38673.33, stdev=1117.59, samples=6 00:08:47.539 iops : min= 9126, max= 9884, avg=9668.33, stdev=279.40, samples=6 00:08:47.539 lat (usec) : 50=0.01%, 100=73.12%, 250=26.80%, 500=0.05%, 750=0.01% 00:08:47.539 lat (msec) : 2=0.01%, 4=0.01% 00:08:47.539 cpu : usr=1.08%, sys=5.94%, ctx=31810, majf=0, minf=1 00:08:47.539 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:47.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.539 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.539 issued rwts: total=31806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:47.539 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:47.539 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=65990: Tue Nov 26 20:32:01 2024 00:08:47.539 read: IOPS=6494, BW=25.4MiB/s (26.6MB/s)(90.5MiB/3566msec) 00:08:47.539 slat (usec): min=3, max=9655, avg= 8.46, stdev=139.73 00:08:47.539 clat (usec): min=67, max=6467, avg=144.86, stdev=99.55 00:08:47.539 lat (usec): min=74, max=9777, avg=153.33, stdev=172.41 00:08:47.539 clat percentiles (usec): 00:08:47.539 | 1.00th=[ 78], 5.00th=[ 84], 10.00th=[ 89], 20.00th=[ 101], 00:08:47.539 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 155], 00:08:47.539 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 184], 00:08:47.539 | 99.00th=[ 208], 99.50th=[ 231], 99.90th=[ 603], 99.95th=[ 3326], 00:08:47.539 | 99.99th=[ 3949] 00:08:47.539 bw ( KiB/s): min=23112, max=24576, per=21.99%, avg=23993.33, stdev=575.45, samples=6 00:08:47.539 iops : min= 5778, max= 6144, avg=5998.33, stdev=143.86, samples=6 00:08:47.539 lat (usec) : 100=19.63%, 250=80.04%, 500=0.19%, 750=0.05% 00:08:47.539 lat (msec) : 2=0.02%, 4=0.06%, 10=0.01% 00:08:47.539 cpu : usr=0.65%, sys=4.15%, ctx=23169, majf=0, minf=2 00:08:47.539 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:47.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.539 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.539 issued rwts: total=23159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:47.539 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:47.539 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=65992: Tue Nov 26 20:32:01 2024 00:08:47.539 read: IOPS=7828, BW=30.6MiB/s (32.1MB/s)(95.4MiB/3120msec) 00:08:47.539 slat (usec): min=5, max=12634, avg= 7.08, stdev=106.71 00:08:47.539 clat (usec): min=75, max=4698, avg=120.06, stdev=48.58 00:08:47.539 lat (usec): min=80, max=12800, avg=127.14, stdev=117.70 00:08:47.539 clat percentiles (usec): 00:08:47.539 | 1.00th=[ 86], 5.00th=[ 91], 10.00th=[ 94], 20.00th=[ 100], 00:08:47.539 | 30.00th=[ 109], 40.00th=[ 114], 50.00th=[ 118], 60.00th=[ 122], 00:08:47.539 | 70.00th=[ 126], 80.00th=[ 131], 90.00th=[ 139], 95.00th=[ 149], 00:08:47.539 | 99.00th=[ 247], 99.50th=[ 375], 99.90th=[ 570], 99.95th=[ 758], 00:08:47.539 | 99.99th=[ 1467] 00:08:47.539 bw ( KiB/s): min=29408, max=37080, per=28.94%, avg=31580.00, stdev=2763.79, samples=6 00:08:47.539 iops : min= 7352, max= 9270, avg=7895.00, stdev=690.95, samples=6 00:08:47.539 lat (usec) : 100=19.62%, 250=79.39%, 500=0.84%, 750=0.09%, 1000=0.03% 
00:08:47.539 lat (msec) : 2=0.02%, 10=0.01% 00:08:47.539 cpu : usr=0.77%, sys=4.68%, ctx=24447, majf=0, minf=1 00:08:47.539 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:47.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.539 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.539 issued rwts: total=24425,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:47.539 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:47.539 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=65993: Tue Nov 26 20:32:01 2024 00:08:47.539 read: IOPS=6127, BW=23.9MiB/s (25.1MB/s)(69.8MiB/2918msec) 00:08:47.539 slat (usec): min=5, max=162, avg= 5.89, stdev= 2.09 00:08:47.539 clat (usec): min=79, max=1576, avg=156.72, stdev=25.24 00:08:47.540 lat (usec): min=84, max=1587, avg=162.61, stdev=25.43 00:08:47.540 clat percentiles (usec): 00:08:47.540 | 1.00th=[ 93], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:08:47.540 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 159], 00:08:47.540 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 186], 00:08:47.540 | 99.00th=[ 206], 99.50th=[ 221], 99.90th=[ 392], 99.95th=[ 498], 00:08:47.540 | 99.99th=[ 1237] 00:08:47.540 bw ( KiB/s): min=24160, max=24904, per=22.45%, avg=24497.60, stdev=302.91, samples=5 00:08:47.540 iops : min= 6040, max= 6226, avg=6124.40, stdev=75.73, samples=5 00:08:47.540 lat (usec) : 100=2.00%, 250=97.76%, 500=0.19%, 750=0.03%, 1000=0.01% 00:08:47.540 lat (msec) : 2=0.01% 00:08:47.540 cpu : usr=0.51%, sys=3.81%, ctx=17881, majf=0, minf=2 00:08:47.540 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:47.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.540 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.540 issued rwts: total=17880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:47.540 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:47.540 00:08:47.540 Run status group 0 (all jobs): 00:08:47.540 READ: bw=107MiB/s (112MB/s), 23.9MiB/s-37.4MiB/s (25.1MB/s-39.2MB/s), io=380MiB (398MB), run=2918-3566msec 00:08:47.540 00:08:47.540 Disk stats (read/write): 00:08:47.540 nvme0n1: ios=30088/0, merge=0/0, ticks=2898/0, in_queue=2898, util=95.47% 00:08:47.540 nvme0n2: ios=21361/0, merge=0/0, ticks=3158/0, in_queue=3158, util=95.02% 00:08:47.540 nvme0n3: ios=23062/0, merge=0/0, ticks=2760/0, in_queue=2760, util=96.62% 00:08:47.540 nvme0n4: ios=17685/0, merge=0/0, ticks=2771/0, in_queue=2771, util=96.71% 00:08:47.797 20:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:47.797 20:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:08:47.797 20:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:47.797 20:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:08:48.055 20:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:48.055 20:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:08:48.313 20:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:48.313 20:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:08:48.572 20:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:08:48.572 20:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 65949 00:08:48.572 20:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:08:48.572 20:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:48.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.572 20:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:48.572 20:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:08:48.572 20:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:48.572 20:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:48.572 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:48.572 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:48.572 nvmf hotplug test: fio failed as expected 00:08:48.572 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:08:48.572 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:08:48.572 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:08:48.572 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:48.830 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:08:48.830 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:08:48.830 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:08:48.830 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:08:48.830 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:08:48.830 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:48.830 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:08:48.830 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:48.830 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:08:48.830 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:48.830 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:48.830 rmmod nvme_tcp 00:08:48.830 rmmod nvme_fabrics 
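The io_u errors and RPC deletions above are target/fio.sh's hotplug check: a background read workload (runtime=10, iodepth=1) is left running against the exported namespaces while the concat, raid and malloc bdevs behind them are deleted over RPC, so the "Operation not supported" errors and the non-zero fio status are the expected result. A condensed sketch of that sequence, built only from commands already visible in this log:

  # start the verification workload in the background and remember its pid
  /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3
  # pull the backing bdevs out from under the running jobs
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0   # ...and the remaining Malloc bdevs
  # fio is expected to exit non-zero once its target devices disappear
  wait "$fio_pid" || echo "nvmf hotplug test: fio failed as expected"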
00:08:48.830 rmmod nvme_keyring 00:08:48.830 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:48.830 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:08:48.830 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:08:48.830 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 65578 ']' 00:08:48.830 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 65578 00:08:48.830 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 65578 ']' 00:08:48.830 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 65578 00:08:48.830 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:08:48.830 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:48.830 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65578 00:08:48.830 killing process with pid 65578 00:08:48.830 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:48.831 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:48.831 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65578' 00:08:48.831 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 65578 00:08:48.831 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 65578 00:08:49.088 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:49.088 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:49.088 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:49.088 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:08:49.088 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:08:49.088 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:08:49.088 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:49.088 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:49.088 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:49.088 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:49.088 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:49.088 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:49.088 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:49.088 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:49.088 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set 
nvmf_init_br2 down 00:08:49.088 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:49.088 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:49.088 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:49.088 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:49.088 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:49.088 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:08:49.347 00:08:49.347 real 0m17.606s 00:08:49.347 user 1m6.079s 00:08:49.347 sys 0m7.947s 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:49.347 ************************************ 00:08:49.347 END TEST nvmf_fio_target 00:08:49.347 ************************************ 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:49.347 ************************************ 00:08:49.347 START TEST nvmf_bdevio 00:08:49.347 ************************************ 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:49.347 * Looking for test storage... 
00:08:49.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:49.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.347 --rc genhtml_branch_coverage=1 00:08:49.347 --rc genhtml_function_coverage=1 00:08:49.347 --rc genhtml_legend=1 00:08:49.347 --rc geninfo_all_blocks=1 00:08:49.347 --rc geninfo_unexecuted_blocks=1 00:08:49.347 00:08:49.347 ' 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:49.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.347 --rc genhtml_branch_coverage=1 00:08:49.347 --rc genhtml_function_coverage=1 00:08:49.347 --rc genhtml_legend=1 00:08:49.347 --rc geninfo_all_blocks=1 00:08:49.347 --rc geninfo_unexecuted_blocks=1 00:08:49.347 00:08:49.347 ' 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:49.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.347 --rc genhtml_branch_coverage=1 00:08:49.347 --rc genhtml_function_coverage=1 00:08:49.347 --rc genhtml_legend=1 00:08:49.347 --rc geninfo_all_blocks=1 00:08:49.347 --rc geninfo_unexecuted_blocks=1 00:08:49.347 00:08:49.347 ' 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:49.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.347 --rc genhtml_branch_coverage=1 00:08:49.347 --rc genhtml_function_coverage=1 00:08:49.347 --rc genhtml_legend=1 00:08:49.347 --rc geninfo_all_blocks=1 00:08:49.347 --rc geninfo_unexecuted_blocks=1 00:08:49.347 00:08:49.347 ' 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.347 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.348 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.348 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.348 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.348 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.348 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:49.607 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:49.607 Cannot find device "nvmf_init_br" 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:49.607 Cannot find device "nvmf_init_br2" 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:49.607 Cannot find device "nvmf_tgt_br" 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:49.607 Cannot find device "nvmf_tgt_br2" 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:49.607 Cannot find device "nvmf_init_br" 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:49.607 Cannot find device "nvmf_init_br2" 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:49.607 Cannot find device "nvmf_tgt_br" 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:08:49.607 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:49.607 Cannot find device "nvmf_tgt_br2" 00:08:49.608 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:08:49.608 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:49.608 Cannot find device "nvmf_br" 00:08:49.608 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:08:49.608 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:49.608 Cannot find device "nvmf_init_if" 00:08:49.608 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:08:49.608 20:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:49.608 Cannot find device "nvmf_init_if2" 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:49.608 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:49.608 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:49.608 
20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:49.608 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:49.867 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:49.867 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:08:49.867 00:08:49.867 --- 10.0.0.3 ping statistics --- 00:08:49.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.867 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:49.867 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:49.867 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:08:49.867 00:08:49.867 --- 10.0.0.4 ping statistics --- 00:08:49.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.867 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:49.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:49.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:08:49.867 00:08:49.867 --- 10.0.0.1 ping statistics --- 00:08:49.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.867 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:49.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:49.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:08:49.867 00:08:49.867 --- 10.0.0.2 ping statistics --- 00:08:49.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.867 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=66300 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 66300 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 66300 ']' 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.867 20:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:49.867 [2024-11-26 20:32:04.247647] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
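The interface plumbing traced above is nvmf_veth_init from test/nvmf/common.sh: with NET_TYPE=virt the target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.3/10.0.0.4 while the initiator side stays in the root namespace on 10.0.0.1/10.0.0.2, the two sides joined by the nvmf_br bridge; the four pings above confirm both directions work. Reduced to a single initiator/target pair, the shape is roughly:

  # the target gets its own network namespace
  ip netns add nvmf_tgt_ns_spdk
  # one veth pair per side; the *_br ends stay in the root namespace for bridging
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # address the endpoints and bring everything up
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the root-namespace peers so the two namespaces share one segment
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.3   # initiator -> target, as in the output above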
00:08:49.867 [2024-11-26 20:32:04.247697] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.867 [2024-11-26 20:32:04.383726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.867 [2024-11-26 20:32:04.417811] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.867 [2024-11-26 20:32:04.417853] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.867 [2024-11-26 20:32:04.417858] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.867 [2024-11-26 20:32:04.417863] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.867 [2024-11-26 20:32:04.417866] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:49.867 [2024-11-26 20:32:04.419097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:49.867 [2024-11-26 20:32:04.419152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:49.867 [2024-11-26 20:32:04.419370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.867 [2024-11-26 20:32:04.419372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:50.186 [2024-11-26 20:32:04.449190] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:50.783 [2024-11-26 20:32:05.126143] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:50.783 Malloc0 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:50.783 [2024-11-26 20:32:05.184952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:50.783 { 00:08:50.783 "params": { 00:08:50.783 "name": "Nvme$subsystem", 00:08:50.783 "trtype": "$TEST_TRANSPORT", 00:08:50.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:50.783 "adrfam": "ipv4", 00:08:50.783 "trsvcid": "$NVMF_PORT", 00:08:50.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:50.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:50.783 "hdgst": ${hdgst:-false}, 00:08:50.783 "ddgst": ${ddgst:-false} 00:08:50.783 }, 00:08:50.783 "method": "bdev_nvme_attach_controller" 00:08:50.783 } 00:08:50.783 EOF 00:08:50.783 )") 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
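The trace above has just expanded the gen_nvmf_target_json here-document: one attach entry per subsystem is collected into config[], joined with IFS=,, run through jq, and bdevio receives the result as --json /dev/fd/62 via process substitution (the rendered JSON follows in the next log lines). A minimal stand-alone sketch of the same hand-off; the outer "subsystems"/"bdev" wrapper is an assumption here, since only the attach entry itself is visible in this excerpt:

#!/usr/bin/env bash
# Sketch only: emits an attach-controller entry matching the one rendered in the
# log below and hands it to bdevio through process substitution (/dev/fd/NN).
# The outer "subsystems"/"bdev" wrapper is assumed, not copied from this log.
gen_attach_json() {
    jq . <<'JSON'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.3",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }]
  }]
}
JSON
}

# Same invocation shape as target/bdevio.sh: the generated config shows up as /dev/fd/62.
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json <(gen_attach_json)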
00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:08:50.783 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:50.783 "params": { 00:08:50.783 "name": "Nvme1", 00:08:50.783 "trtype": "tcp", 00:08:50.783 "traddr": "10.0.0.3", 00:08:50.783 "adrfam": "ipv4", 00:08:50.783 "trsvcid": "4420", 00:08:50.783 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:50.783 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:50.783 "hdgst": false, 00:08:50.783 "ddgst": false 00:08:50.783 }, 00:08:50.783 "method": "bdev_nvme_attach_controller" 00:08:50.783 }' 00:08:50.783 [2024-11-26 20:32:05.225650] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:08:50.783 [2024-11-26 20:32:05.225711] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66336 ] 00:08:51.042 [2024-11-26 20:32:05.364165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:51.042 [2024-11-26 20:32:05.402139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.042 [2024-11-26 20:32:05.402258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.042 [2024-11-26 20:32:05.402509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.042 [2024-11-26 20:32:05.443207] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.042 I/O targets: 00:08:51.042 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:08:51.042 00:08:51.042 00:08:51.042 CUnit - A unit testing framework for C - Version 2.1-3 00:08:51.042 http://cunit.sourceforge.net/ 00:08:51.042 00:08:51.042 00:08:51.042 Suite: bdevio tests on: Nvme1n1 00:08:51.042 Test: blockdev write read block ...passed 00:08:51.042 Test: blockdev write zeroes read block ...passed 00:08:51.042 Test: blockdev write zeroes read no split ...passed 00:08:51.042 Test: blockdev write zeroes read split ...passed 00:08:51.042 Test: blockdev write zeroes read split partial ...passed 00:08:51.042 Test: blockdev reset ...[2024-11-26 20:32:05.577730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:08:51.042 [2024-11-26 20:32:05.577823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2015190 (9): Bad file descriptor 00:08:51.042 [2024-11-26 20:32:05.591243] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:08:51.042 passed 00:08:51.042 Test: blockdev write read 8 blocks ...passed 00:08:51.042 Test: blockdev write read size > 128k ...passed 00:08:51.042 Test: blockdev write read invalid size ...passed 00:08:51.042 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:51.042 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:51.042 Test: blockdev write read max offset ...passed 00:08:51.042 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:51.042 Test: blockdev writev readv 8 blocks ...passed 00:08:51.042 Test: blockdev writev readv 30 x 1block ...passed 00:08:51.042 Test: blockdev writev readv block ...passed 00:08:51.042 Test: blockdev writev readv size > 128k ...passed 00:08:51.042 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:51.302 Test: blockdev comparev and writev ...[2024-11-26 20:32:05.596443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:51.302 [2024-11-26 20:32:05.596483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:08:51.302 [2024-11-26 20:32:05.596497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:51.302 [2024-11-26 20:32:05.596504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:08:51.302 [2024-11-26 20:32:05.596752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:51.302 [2024-11-26 20:32:05.596768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:08:51.302 [2024-11-26 20:32:05.596783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:51.302 [2024-11-26 20:32:05.596792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:08:51.302 [2024-11-26 20:32:05.597140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:51.302 [2024-11-26 20:32:05.597155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:08:51.302 [2024-11-26 20:32:05.597167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:51.302 [2024-11-26 20:32:05.597172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:08:51.302 [2024-11-26 20:32:05.597424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:51.302 [2024-11-26 20:32:05.597437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:08:51.302 [2024-11-26 20:32:05.597449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:51.302 [2024-11-26 20:32:05.597455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:08:51.302 passed 00:08:51.302 Test: blockdev nvme passthru rw ...passed 00:08:51.302 Test: blockdev nvme passthru vendor specific ...[2024-11-26 20:32:05.597956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:51.302 [2024-11-26 20:32:05.597966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:08:51.302 [2024-11-26 20:32:05.598070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:51.302 [2024-11-26 20:32:05.598088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:08:51.302 [2024-11-26 20:32:05.598170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:51.302 [2024-11-26 20:32:05.598182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:08:51.302 passed 00:08:51.302 Test: blockdev nvme admin passthru ...[2024-11-26 20:32:05.598257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:51.302 [2024-11-26 20:32:05.598268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:08:51.302 passed 00:08:51.302 Test: blockdev copy ...passed 00:08:51.302 00:08:51.302 Run Summary: Type Total Ran Passed Failed Inactive 00:08:51.302 suites 1 1 n/a 0 0 00:08:51.302 tests 23 23 23 0 0 00:08:51.302 asserts 152 152 152 0 n/a 00:08:51.302 00:08:51.302 Elapsed time = 0.134 seconds 00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:51.302 rmmod nvme_tcp 00:08:51.302 rmmod nvme_fabrics 00:08:51.302 rmmod nvme_keyring 00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
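The run summary closes the suite, and the script now disarms the trap it armed at startup (trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT) before calling the teardown explicitly. A small sketch of that arm-then-disarm cleanup idiom, with a hypothetical my_testfini standing in for nvmftestfini:

#!/usr/bin/env bash
# Sketch of the cleanup idiom seen above: teardown is registered on SIGINT/SIGTERM/EXIT
# so it also runs if the test dies early, then disarmed once the happy path reaches the
# deliberate teardown. my_testfini is a hypothetical stand-in for nvmftestfini.
my_testfini() {
    echo "tearing down target, namespaces and modules"
}

trap 'my_testfini' SIGINT SIGTERM EXIT   # armed for the whole test body

echo "running bdevio suite"              # test body would go here

trap - SIGINT SIGTERM EXIT               # disarm: teardown now runs exactly once
my_testfini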
00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 66300 ']' 00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 66300 00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 66300 ']' 00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 66300 00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66300 00:08:51.302 killing process with pid 66300 00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66300' 00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 66300 00:08:51.302 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 66300 00:08:51.561 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:51.561 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:51.561 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:51.561 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:08:51.561 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:08:51.561 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:51.561 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:08:51.561 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:51.561 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:51.561 20:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:51.561 20:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:51.561 20:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:51.561 20:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:51.561 20:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:51.561 20:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:51.561 20:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:51.561 20:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:51.561 20:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:51.561 20:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:08:51.561 20:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:51.819 20:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:51.819 20:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:51.819 20:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:51.819 20:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.819 20:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.819 20:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.819 20:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:08:51.819 00:08:51.819 real 0m2.432s 00:08:51.819 user 0m7.285s 00:08:51.819 sys 0m0.631s 00:08:51.819 20:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.819 20:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:51.819 ************************************ 00:08:51.819 END TEST nvmf_bdevio 00:08:51.819 ************************************ 00:08:51.819 20:32:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:51.819 ************************************ 00:08:51.819 END TEST nvmf_target_core 00:08:51.819 ************************************ 00:08:51.819 00:08:51.819 real 2m33.260s 00:08:51.819 user 7m0.250s 00:08:51.819 sys 0m39.866s 00:08:51.819 20:32:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.819 20:32:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:51.819 20:32:06 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:08:51.819 20:32:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:51.819 20:32:06 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.819 20:32:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:51.819 ************************************ 00:08:51.819 START TEST nvmf_target_extra 00:08:51.819 ************************************ 00:08:51.819 20:32:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:08:51.819 * Looking for test storage... 
00:08:51.819 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:51.819 20:32:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:51.819 20:32:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:08:51.819 20:32:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:52.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.079 --rc genhtml_branch_coverage=1 00:08:52.079 --rc genhtml_function_coverage=1 00:08:52.079 --rc genhtml_legend=1 00:08:52.079 --rc geninfo_all_blocks=1 00:08:52.079 --rc geninfo_unexecuted_blocks=1 00:08:52.079 00:08:52.079 ' 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:52.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.079 --rc genhtml_branch_coverage=1 00:08:52.079 --rc genhtml_function_coverage=1 00:08:52.079 --rc genhtml_legend=1 00:08:52.079 --rc geninfo_all_blocks=1 00:08:52.079 --rc geninfo_unexecuted_blocks=1 00:08:52.079 00:08:52.079 ' 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:52.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.079 --rc genhtml_branch_coverage=1 00:08:52.079 --rc genhtml_function_coverage=1 00:08:52.079 --rc genhtml_legend=1 00:08:52.079 --rc geninfo_all_blocks=1 00:08:52.079 --rc geninfo_unexecuted_blocks=1 00:08:52.079 00:08:52.079 ' 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:52.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.079 --rc genhtml_branch_coverage=1 00:08:52.079 --rc genhtml_function_coverage=1 00:08:52.079 --rc genhtml_legend=1 00:08:52.079 --rc geninfo_all_blocks=1 00:08:52.079 --rc geninfo_unexecuted_blocks=1 00:08:52.079 00:08:52.079 ' 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.079 20:32:06 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:52.079 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:08:52.079 ************************************ 00:08:52.079 START TEST nvmf_auth_target 00:08:52.079 ************************************ 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:08:52.079 * Looking for test storage... 
00:08:52.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.079 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:52.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.080 --rc genhtml_branch_coverage=1 00:08:52.080 --rc genhtml_function_coverage=1 00:08:52.080 --rc genhtml_legend=1 00:08:52.080 --rc geninfo_all_blocks=1 00:08:52.080 --rc geninfo_unexecuted_blocks=1 00:08:52.080 00:08:52.080 ' 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:52.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.080 --rc genhtml_branch_coverage=1 00:08:52.080 --rc genhtml_function_coverage=1 00:08:52.080 --rc genhtml_legend=1 00:08:52.080 --rc geninfo_all_blocks=1 00:08:52.080 --rc geninfo_unexecuted_blocks=1 00:08:52.080 00:08:52.080 ' 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:52.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.080 --rc genhtml_branch_coverage=1 00:08:52.080 --rc genhtml_function_coverage=1 00:08:52.080 --rc genhtml_legend=1 00:08:52.080 --rc geninfo_all_blocks=1 00:08:52.080 --rc geninfo_unexecuted_blocks=1 00:08:52.080 00:08:52.080 ' 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:52.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.080 --rc genhtml_branch_coverage=1 00:08:52.080 --rc genhtml_function_coverage=1 00:08:52.080 --rc genhtml_legend=1 00:08:52.080 --rc geninfo_all_blocks=1 00:08:52.080 --rc geninfo_unexecuted_blocks=1 00:08:52.080 00:08:52.080 ' 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:52.080 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:52.080 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:52.081 
20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:52.081 Cannot find device "nvmf_init_br" 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:52.081 Cannot find device "nvmf_init_br2" 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:52.081 Cannot find device "nvmf_tgt_br" 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:52.081 Cannot find device "nvmf_tgt_br2" 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:52.081 Cannot find device "nvmf_init_br" 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:52.081 Cannot find device "nvmf_init_br2" 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:08:52.081 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:52.340 Cannot find device "nvmf_tgt_br" 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:52.340 Cannot find device "nvmf_tgt_br2" 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:52.340 Cannot find device "nvmf_br" 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:52.340 Cannot find device "nvmf_init_if" 00:08:52.340 20:32:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:52.340 Cannot find device "nvmf_init_if2" 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:52.340 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:52.340 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:52.340 20:32:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:52.340 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:52.340 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:08:52.340 00:08:52.340 --- 10.0.0.3 ping statistics --- 00:08:52.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.340 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:52.340 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:52.340 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:08:52.340 00:08:52.340 --- 10.0.0.4 ping statistics --- 00:08:52.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.340 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:52.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:52.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:08:52.340 00:08:52.340 --- 10.0.0.1 ping statistics --- 00:08:52.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.340 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:52.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:52.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:08:52.340 00:08:52.340 --- 10.0.0.2 ping statistics --- 00:08:52.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.340 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=66619 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 66619 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 66619 ']' 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
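[editor's note] Each firewall rule above is inserted through the ipts wrapper, which appends -m comment --comment 'SPDK_NVMF:<original rule>' so teardown can find and delete exactly the rules this test added. A sketch of the tagging idea; the rule and tag text are taken from the log, while the cleanup command is one common way to drop tagged rules, not necessarily the helper's exact implementation:

# open the NVMe/TCP listener port on the initiator-facing interface, tagged for cleanup
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

# later: remove every rule carrying the SPDK_NVMF tag and keep everything else
iptables-save | grep -v SPDK_NVMF | iptables-restore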
00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.340 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:52.599 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:08:53.666 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.666 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:08:53.666 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:53.666 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:53.666 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:53.666 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.666 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=66651 00:08:53.666 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:08:53.666 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:08:53.666 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:08:53.666 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fa937271a7995e9ee4416076bc7f4a62b3d0710a850cf967 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.hs9 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fa937271a7995e9ee4416076bc7f4a62b3d0710a850cf967 0 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fa937271a7995e9ee4416076bc7f4a62b3d0710a850cf967 0 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fa937271a7995e9ee4416076bc7f4a62b3d0710a850cf967 00:08:53.667 20:32:07 
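[editor's note] At this point two SPDK applications are being started: nvmf_tgt runs inside the namespace (target side, default RPC socket /var/tmp/spdk.sock), while spdk_tgt runs in the root namespace as the host/initiator with its RPC socket moved to /var/tmp/host.sock so the two do not collide. Later rpc_cmd calls go to the target and hostrpc calls to the host. Condensed from the launch lines above (-L nvmf_auth / -L nvme_auth enable the auth debug log flags):

# target: runs inside the namespace so it owns the 10.0.0.3/10.0.0.4 listeners
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!

# host: a second SPDK app in the root namespace, with its RPC socket moved aside
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &
hostpid=$!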
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.hs9 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.hs9 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.hs9 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2272e7c031e5b16bce40216bc8bfd58785152d6f61f613e9c4fb520a3dc6000d 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Ho5 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2272e7c031e5b16bce40216bc8bfd58785152d6f61f613e9c4fb520a3dc6000d 3 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2272e7c031e5b16bce40216bc8bfd58785152d6f61f613e9c4fb520a3dc6000d 3 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2272e7c031e5b16bce40216bc8bfd58785152d6f61f613e9c4fb520a3dc6000d 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Ho5 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Ho5 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Ho5 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:53.667 20:32:07 
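[editor's note] gen_dhchap_key draws len/2 random bytes with xxd, wraps the resulting hex string into an NVMe DH-HMAC-CHAP secret of the form DHHC-1:<hash id>:<base64(secret + CRC-32)>:, and stores it 0600 in a temp file (hash ids: 00 null, 01 sha256, 02 sha384, 03 sha512). The python body is not shown in the log, so the snippet below is only a sketch consistent with the DHHC-1 strings that appear later; the little-endian CRC-32 suffix is an assumption based on the NVMe-oF secret representation.

key=$(xxd -p -c0 -l 24 /dev/urandom)        # 24 random bytes -> 48-char hex secret
file=$(mktemp -t spdk.key-null.XXX)

python3 - "$key" > "$file" <<'EOF'
import sys, base64, zlib
secret = sys.argv[1].encode()                      # the hex string itself is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")     # assumed little-endian CRC-32 suffix
print("DHHC-1:00:%s:" % base64.b64encode(secret + crc).decode())   # 00 = null hash
EOF

chmod 0600 "$file"                          # key files must not be world-readable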
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=71972976e754b2d05da5516065258e10 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.2wi 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 71972976e754b2d05da5516065258e10 1 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 71972976e754b2d05da5516065258e10 1 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=71972976e754b2d05da5516065258e10 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.2wi 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.2wi 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.2wi 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f817f4cadb3eb6769f265adb6340caa1fd360c2a2db7d6b2 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ArJ 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f817f4cadb3eb6769f265adb6340caa1fd360c2a2db7d6b2 2 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
f817f4cadb3eb6769f265adb6340caa1fd360c2a2db7d6b2 2 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f817f4cadb3eb6769f265adb6340caa1fd360c2a2db7d6b2 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:08:53.667 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:53.667 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ArJ 00:08:53.667 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ArJ 00:08:53.667 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.ArJ 00:08:53.667 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:08:53.667 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:53.667 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:53.667 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:53.667 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:08:53.667 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:08:53.667 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:08:53.667 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e9cf5b93a471dc1ae7d73963a274bb0ff57fe457527861ba 00:08:53.667 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:08:53.667 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.4TD 00:08:53.667 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e9cf5b93a471dc1ae7d73963a274bb0ff57fe457527861ba 2 00:08:53.667 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e9cf5b93a471dc1ae7d73963a274bb0ff57fe457527861ba 2 00:08:53.667 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:53.667 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:53.667 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e9cf5b93a471dc1ae7d73963a274bb0ff57fe457527861ba 00:08:53.667 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:08:53.667 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:53.667 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.4TD 00:08:53.667 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.4TD 00:08:53.667 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.4TD 00:08:53.667 20:32:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:08:53.667 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:53.667 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c6432cbef45b5a76489f9b29eec1ad26 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.oRY 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c6432cbef45b5a76489f9b29eec1ad26 1 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c6432cbef45b5a76489f9b29eec1ad26 1 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c6432cbef45b5a76489f9b29eec1ad26 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.oRY 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.oRY 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.oRY 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4c58f4264587c3c322c373edee8b0793fcd2ae663fd0a9bf8deedc4b7e381c63 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:08:53.668 
20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.tct 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4c58f4264587c3c322c373edee8b0793fcd2ae663fd0a9bf8deedc4b7e381c63 3 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4c58f4264587c3c322c373edee8b0793fcd2ae663fd0a9bf8deedc4b7e381c63 3 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4c58f4264587c3c322c373edee8b0793fcd2ae663fd0a9bf8deedc4b7e381c63 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.tct 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.tct 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.tct 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 66619 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 66619 ']' 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.668 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:53.925 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.925 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:08:53.925 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 66651 /var/tmp/host.sock 00:08:53.925 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 66651 ']' 00:08:53.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
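[editor's note] waitforlisten blocks until the given pid is up and its UNIX-domain RPC socket answers requests; it runs once per application (nvmfpid 66619 on /var/tmp/spdk.sock, hostpid 66651 on /var/tmp/host.sock). A minimal way to poll for the same condition, shown as an illustrative helper rather than the one in common.sh (rpc_get_methods is just a cheap RPC to probe with; the real helper in the log also tracks max_retries and checks the pid):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

wait_for_rpc_sock() {                       # illustrative helper, not SPDK's waitforlisten
    local pid=$1 sock=$2 i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" || return 1          # give up if the process died
        [[ -S $sock ]] && "$rpc" -s "$sock" rpc_get_methods &> /dev/null && return 0
        sleep 0.1
    done
    return 1
}

wait_for_rpc_sock "$nvmfpid" /var/tmp/spdk.sock
wait_for_rpc_sock "$hostpid" /var/tmp/host.sock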
00:08:53.925 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:08:53.925 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.925 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:08:53.925 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.925 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:54.183 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.183 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:08:54.183 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:08:54.183 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.183 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:54.183 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.183 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:08:54.183 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.hs9 00:08:54.183 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.183 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:54.183 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.183 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.hs9 00:08:54.183 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.hs9 00:08:54.442 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.Ho5 ]] 00:08:54.442 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ho5 00:08:54.442 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.442 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:54.442 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.442 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ho5 00:08:54.442 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ho5 00:08:54.700 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:08:54.700 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd 
keyring_file_add_key key1 /tmp/spdk.key-sha256.2wi 00:08:54.700 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.700 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:54.700 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.700 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.2wi 00:08:54.700 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.2wi 00:08:54.956 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.ArJ ]] 00:08:54.956 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ArJ 00:08:54.956 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.956 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:54.956 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.956 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ArJ 00:08:54.956 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ArJ 00:08:54.956 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:08:54.956 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.4TD 00:08:54.956 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.956 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:55.213 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.213 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.4TD 00:08:55.213 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.4TD 00:08:55.213 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.oRY ]] 00:08:55.213 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oRY 00:08:55.213 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.213 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:55.213 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.213 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oRY 00:08:55.213 20:32:09 
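[editor's note] Every generated secret is registered twice, once in the target's keyring and once in the host's, under stable names (key0..key3 for host secrets, ckey0..ckey2 for controller secrets); all later auth RPCs refer to these names instead of file paths. Condensed from the calls above, using the paths this run generated (the remaining keys follow the same pattern):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

for entry in key0:/tmp/spdk.key-null.hs9 ckey0:/tmp/spdk.key-sha512.Ho5 \
             key1:/tmp/spdk.key-sha256.2wi ckey1:/tmp/spdk.key-sha384.ArJ; do
    "$rpc"                       keyring_file_add_key "${entry%%:*}" "${entry#*:}"   # target side
    "$rpc" -s /var/tmp/host.sock keyring_file_add_key "${entry%%:*}" "${entry#*:}"   # host side
done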
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oRY 00:08:55.472 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:08:55.472 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.tct 00:08:55.472 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.472 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:55.472 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.472 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.tct 00:08:55.472 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.tct 00:08:55.732 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:08:55.732 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:08:55.732 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:08:55.732 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:08:55.732 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:08:55.732 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:08:55.732 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:08:55.732 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:08:55.732 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:08:55.732 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:08:55.732 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:08:55.732 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:08:55.732 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:08:55.732 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.732 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:55.990 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.990 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:08:55.990 20:32:10 
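[editor's note] connect_authenticate exercises one digest/dhgroup/key combination at a time: the host is restricted to that digest and DH group, the target is told to require DH-CHAP from this host NQN with the chosen key (and controller key, when one exists), and a controller is then attached over 10.0.0.3:4420. The three RPCs for the sha256/null/key0 case, pulled together from the log for readability:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9

# host: only negotiate this digest and DH group
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# target: require DH-CHAP from this host; bidirectional because a controller key is given
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# host: attach a controller; authentication happens during CONNECT
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
       -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0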
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:08:55.990 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:08:55.990 00:08:55.990 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:08:55.990 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:08:55.990 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:08:56.248 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:08:56.248 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:08:56.248 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.248 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:56.248 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.248 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:08:56.248 { 00:08:56.248 "cntlid": 1, 00:08:56.248 "qid": 0, 00:08:56.248 "state": "enabled", 00:08:56.248 "thread": "nvmf_tgt_poll_group_000", 00:08:56.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:08:56.248 "listen_address": { 00:08:56.248 "trtype": "TCP", 00:08:56.248 "adrfam": "IPv4", 00:08:56.248 "traddr": "10.0.0.3", 00:08:56.248 "trsvcid": "4420" 00:08:56.248 }, 00:08:56.248 "peer_address": { 00:08:56.248 "trtype": "TCP", 00:08:56.249 "adrfam": "IPv4", 00:08:56.249 "traddr": "10.0.0.1", 00:08:56.249 "trsvcid": "52290" 00:08:56.249 }, 00:08:56.249 "auth": { 00:08:56.249 "state": "completed", 00:08:56.249 "digest": "sha256", 00:08:56.249 "dhgroup": "null" 00:08:56.249 } 00:08:56.249 } 00:08:56.249 ]' 00:08:56.249 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:08:56.249 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:08:56.249 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:08:56.249 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:08:56.249 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:08:56.249 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:08:56.249 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:08:56.249 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:08:56.506 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:08:56.506 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:09:00.690 20:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:00.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:00.690 20:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:00.690 20:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.690 20:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:00.690 20:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.690 20:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:00.690 20:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:00.690 20:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:00.690 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:09:00.690 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:00.690 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:00.690 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:00.690 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:00.690 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:00.690 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:00.690 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.690 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:00.690 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
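[editor's note] Success is checked from both sides, as in the jq calls above: the host must report the attached controller (bdev_nvme_get_controllers), and the target's queue-pair listing must show an auth block whose digest, dhgroup and state match what was negotiated. The same checks, condensed for the sha256/null case:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

[[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]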
00:09:00.690 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:00.690 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:00.690 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:00.948 00:09:00.948 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:00.948 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:00.948 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:01.206 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:01.206 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:01.206 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.206 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:01.206 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.206 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:01.206 { 00:09:01.206 "cntlid": 3, 00:09:01.206 "qid": 0, 00:09:01.206 "state": "enabled", 00:09:01.206 "thread": "nvmf_tgt_poll_group_000", 00:09:01.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:01.206 "listen_address": { 00:09:01.206 "trtype": "TCP", 00:09:01.206 "adrfam": "IPv4", 00:09:01.206 "traddr": "10.0.0.3", 00:09:01.206 "trsvcid": "4420" 00:09:01.206 }, 00:09:01.206 "peer_address": { 00:09:01.206 "trtype": "TCP", 00:09:01.206 "adrfam": "IPv4", 00:09:01.206 "traddr": "10.0.0.1", 00:09:01.206 "trsvcid": "48526" 00:09:01.206 }, 00:09:01.206 "auth": { 00:09:01.206 "state": "completed", 00:09:01.206 "digest": "sha256", 00:09:01.206 "dhgroup": "null" 00:09:01.206 } 00:09:01.206 } 00:09:01.206 ]' 00:09:01.206 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:01.206 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:01.206 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:01.206 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:01.206 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:01.206 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:01.206 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:01.206 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:01.463 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:09:01.463 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:09:02.028 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:02.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:02.028 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:02.028 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.028 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:02.028 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.028 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:02.028 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:02.028 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:02.286 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:09:02.286 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:02.286 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:02.286 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:02.286 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:02.286 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:02.286 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:02.286 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.286 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:02.286 20:32:16 
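[editor's note] After the SPDK-initiator path, each key pair is also exercised with the kernel initiator: nvme-cli is handed the formatted DHHC-1 secrets directly (host secret and, for bidirectional auth, the controller secret), and the host entry is then removed from the subsystem so the next combination starts clean. The shape of that sequence, reading the secrets back from the key files generated earlier rather than repeating the long base64 strings:

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9
key=$(cat /tmp/spdk.key-null.hs9)      # DHHC-1:00:... host secret from gen_dhchap_key
ckey=$(cat /tmp/spdk.key-sha512.Ho5)   # DHHC-1:03:... controller secret

nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
     --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 \
     --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"

nvme disconnect -n "$subnqn"

# let the next digest/dhgroup/key combination start from a clean subsystem
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"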
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.286 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:02.286 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:02.286 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:02.544 00:09:02.545 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:02.545 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:02.545 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:02.802 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:02.802 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:02.802 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.802 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:02.802 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.802 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:02.802 { 00:09:02.802 "cntlid": 5, 00:09:02.802 "qid": 0, 00:09:02.802 "state": "enabled", 00:09:02.802 "thread": "nvmf_tgt_poll_group_000", 00:09:02.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:02.802 "listen_address": { 00:09:02.802 "trtype": "TCP", 00:09:02.802 "adrfam": "IPv4", 00:09:02.802 "traddr": "10.0.0.3", 00:09:02.802 "trsvcid": "4420" 00:09:02.802 }, 00:09:02.802 "peer_address": { 00:09:02.802 "trtype": "TCP", 00:09:02.802 "adrfam": "IPv4", 00:09:02.802 "traddr": "10.0.0.1", 00:09:02.802 "trsvcid": "48552" 00:09:02.802 }, 00:09:02.802 "auth": { 00:09:02.802 "state": "completed", 00:09:02.802 "digest": "sha256", 00:09:02.802 "dhgroup": "null" 00:09:02.802 } 00:09:02.802 } 00:09:02.802 ]' 00:09:02.802 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:02.802 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:02.802 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:02.802 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:02.802 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:03.061 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:09:03.061 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:03.061 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:03.061 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:09:03.061 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:09:03.627 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:03.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:03.627 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:03.627 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.627 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:03.627 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.627 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:03.627 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:03.627 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:03.885 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:09:03.885 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:03.885 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:03.885 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:03.885 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:03.885 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:03.885 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key3 00:09:03.885 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.885 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:09:03.885 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.885 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:03.885 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:03.885 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:04.143 00:09:04.143 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:04.143 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:04.143 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:04.401 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:04.401 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:04.401 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.401 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:04.401 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.401 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:04.401 { 00:09:04.401 "cntlid": 7, 00:09:04.401 "qid": 0, 00:09:04.401 "state": "enabled", 00:09:04.401 "thread": "nvmf_tgt_poll_group_000", 00:09:04.401 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:04.401 "listen_address": { 00:09:04.401 "trtype": "TCP", 00:09:04.401 "adrfam": "IPv4", 00:09:04.401 "traddr": "10.0.0.3", 00:09:04.401 "trsvcid": "4420" 00:09:04.401 }, 00:09:04.401 "peer_address": { 00:09:04.401 "trtype": "TCP", 00:09:04.401 "adrfam": "IPv4", 00:09:04.401 "traddr": "10.0.0.1", 00:09:04.401 "trsvcid": "48572" 00:09:04.401 }, 00:09:04.401 "auth": { 00:09:04.401 "state": "completed", 00:09:04.401 "digest": "sha256", 00:09:04.401 "dhgroup": "null" 00:09:04.401 } 00:09:04.401 } 00:09:04.401 ]' 00:09:04.401 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:04.401 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:04.401 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:04.401 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:04.401 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:04.662 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:09:04.662 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:04.662 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:04.662 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:09:04.662 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:09:05.601 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:05.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:05.601 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:05.601 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.601 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:05.601 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.601 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:05.601 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:05.601 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:05.601 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:05.601 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:09:05.601 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:05.601 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:05.601 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:05.601 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:05.601 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:05.601 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:05.601 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.601 20:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:05.601 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.601 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:05.601 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:05.601 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:05.859 00:09:05.859 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:05.859 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:05.859 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:06.119 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:06.119 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:06.119 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.119 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:06.119 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.119 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:06.119 { 00:09:06.119 "cntlid": 9, 00:09:06.119 "qid": 0, 00:09:06.119 "state": "enabled", 00:09:06.119 "thread": "nvmf_tgt_poll_group_000", 00:09:06.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:06.119 "listen_address": { 00:09:06.119 "trtype": "TCP", 00:09:06.119 "adrfam": "IPv4", 00:09:06.119 "traddr": "10.0.0.3", 00:09:06.119 "trsvcid": "4420" 00:09:06.119 }, 00:09:06.119 "peer_address": { 00:09:06.119 "trtype": "TCP", 00:09:06.119 "adrfam": "IPv4", 00:09:06.119 "traddr": "10.0.0.1", 00:09:06.119 "trsvcid": "48588" 00:09:06.119 }, 00:09:06.119 "auth": { 00:09:06.119 "state": "completed", 00:09:06.119 "digest": "sha256", 00:09:06.119 "dhgroup": "ffdhe2048" 00:09:06.119 } 00:09:06.119 } 00:09:06.119 ]' 00:09:06.119 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:06.119 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:06.119 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:06.378 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:06.378 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:09:06.378 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:06.378 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:06.378 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:06.378 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:09:06.378 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:09:07.015 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:07.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:07.015 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:07.015 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.015 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:07.015 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.015 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:07.015 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:07.015 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:07.275 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:09:07.275 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:07.275 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:07.275 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:07.275 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:07.275 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:07.275 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:07.275 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.275 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:07.275 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.275 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:07.275 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:07.275 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:07.534 00:09:07.534 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:07.534 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:07.534 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:07.796 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:07.796 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:07.796 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.796 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:07.796 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.796 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:07.796 { 00:09:07.796 "cntlid": 11, 00:09:07.796 "qid": 0, 00:09:07.796 "state": "enabled", 00:09:07.796 "thread": "nvmf_tgt_poll_group_000", 00:09:07.796 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:07.796 "listen_address": { 00:09:07.796 "trtype": "TCP", 00:09:07.796 "adrfam": "IPv4", 00:09:07.796 "traddr": "10.0.0.3", 00:09:07.796 "trsvcid": "4420" 00:09:07.796 }, 00:09:07.796 "peer_address": { 00:09:07.796 "trtype": "TCP", 00:09:07.796 "adrfam": "IPv4", 00:09:07.796 "traddr": "10.0.0.1", 00:09:07.796 "trsvcid": "48610" 00:09:07.796 }, 00:09:07.796 "auth": { 00:09:07.796 "state": "completed", 00:09:07.796 "digest": "sha256", 00:09:07.796 "dhgroup": "ffdhe2048" 00:09:07.796 } 00:09:07.796 } 00:09:07.796 ]' 00:09:07.796 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:07.796 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:07.796 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:09:07.796 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:07.796 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:07.796 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:07.796 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:07.796 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:08.058 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:09:08.058 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:09:08.992 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:08.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:08.992 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:08.992 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.992 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:08.992 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.992 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:08.992 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:08.992 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:08.992 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:09:08.992 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:08.992 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:08.992 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:08.992 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:08.992 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:08.992 20:32:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:08.992 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.992 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:08.992 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.992 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:08.993 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:08.993 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:09.251 00:09:09.251 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:09.251 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:09.252 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:09.512 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:09.512 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:09.512 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.512 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:09.512 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.512 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:09.512 { 00:09:09.512 "cntlid": 13, 00:09:09.512 "qid": 0, 00:09:09.512 "state": "enabled", 00:09:09.512 "thread": "nvmf_tgt_poll_group_000", 00:09:09.512 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:09.512 "listen_address": { 00:09:09.512 "trtype": "TCP", 00:09:09.512 "adrfam": "IPv4", 00:09:09.512 "traddr": "10.0.0.3", 00:09:09.512 "trsvcid": "4420" 00:09:09.512 }, 00:09:09.512 "peer_address": { 00:09:09.512 "trtype": "TCP", 00:09:09.512 "adrfam": "IPv4", 00:09:09.512 "traddr": "10.0.0.1", 00:09:09.512 "trsvcid": "48616" 00:09:09.512 }, 00:09:09.512 "auth": { 00:09:09.512 "state": "completed", 00:09:09.512 "digest": "sha256", 00:09:09.512 "dhgroup": "ffdhe2048" 00:09:09.512 } 00:09:09.512 } 00:09:09.512 ]' 00:09:09.512 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:09.512 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:09.512 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:09.512 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:09.513 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:09.513 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:09.513 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:09.513 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:09.772 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:09:09.772 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:09:10.396 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:10.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:10.396 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:10.396 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.396 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:10.396 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.396 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:10.396 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:10.396 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:10.677 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:09:10.677 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:10.677 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:10.677 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:10.677 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:10.677 20:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:10.677 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key3 00:09:10.677 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.677 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:10.677 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.677 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:10.677 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:10.677 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:10.936 00:09:10.936 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:10.936 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:10.936 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:11.194 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:11.195 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:11.195 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.195 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:11.195 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.195 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:11.195 { 00:09:11.195 "cntlid": 15, 00:09:11.195 "qid": 0, 00:09:11.195 "state": "enabled", 00:09:11.195 "thread": "nvmf_tgt_poll_group_000", 00:09:11.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:11.195 "listen_address": { 00:09:11.195 "trtype": "TCP", 00:09:11.195 "adrfam": "IPv4", 00:09:11.195 "traddr": "10.0.0.3", 00:09:11.195 "trsvcid": "4420" 00:09:11.195 }, 00:09:11.195 "peer_address": { 00:09:11.195 "trtype": "TCP", 00:09:11.195 "adrfam": "IPv4", 00:09:11.195 "traddr": "10.0.0.1", 00:09:11.195 "trsvcid": "60588" 00:09:11.195 }, 00:09:11.195 "auth": { 00:09:11.195 "state": "completed", 00:09:11.195 "digest": "sha256", 00:09:11.195 "dhgroup": "ffdhe2048" 00:09:11.195 } 00:09:11.195 } 00:09:11.195 ]' 00:09:11.195 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:11.453 20:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:11.453 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:11.453 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:11.453 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:11.453 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:11.453 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:11.453 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:11.713 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:09:11.713 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:09:12.316 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:12.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:12.316 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:12.316 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.316 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:12.316 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.316 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:12.316 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:12.316 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:12.316 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:12.577 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:09:12.577 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:12.577 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:12.577 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:12.577 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
key=key0 00:09:12.577 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:12.577 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:12.577 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.577 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:12.577 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.577 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:12.577 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:12.577 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:12.838 00:09:12.838 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:12.838 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:12.838 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:13.096 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:13.096 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:13.096 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.096 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:13.096 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.096 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:13.096 { 00:09:13.096 "cntlid": 17, 00:09:13.096 "qid": 0, 00:09:13.096 "state": "enabled", 00:09:13.096 "thread": "nvmf_tgt_poll_group_000", 00:09:13.096 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:13.096 "listen_address": { 00:09:13.096 "trtype": "TCP", 00:09:13.096 "adrfam": "IPv4", 00:09:13.096 "traddr": "10.0.0.3", 00:09:13.096 "trsvcid": "4420" 00:09:13.096 }, 00:09:13.096 "peer_address": { 00:09:13.096 "trtype": "TCP", 00:09:13.096 "adrfam": "IPv4", 00:09:13.096 "traddr": "10.0.0.1", 00:09:13.096 "trsvcid": "60618" 00:09:13.096 }, 00:09:13.096 "auth": { 00:09:13.096 "state": "completed", 00:09:13.096 "digest": "sha256", 00:09:13.096 "dhgroup": "ffdhe3072" 00:09:13.096 } 00:09:13.096 } 00:09:13.096 ]' 00:09:13.096 20:32:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:13.096 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:13.096 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:13.096 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:13.096 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:13.096 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:13.096 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:13.096 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:13.354 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:09:13.355 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:09:14.293 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:14.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:14.293 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:14.293 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.293 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.293 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.293 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:14.293 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:14.293 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:14.293 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:09:14.293 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:14.293 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 
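For orientation, the trace above and below repeats one and the same DH-HMAC-CHAP exercise for every digest/dhgroup/key combination. A condensed sketch of a single iteration is given here, using only the RPCs, flags and addresses visible in this log; hostrpc is the host-side rpc.py wrapper shown expanded in the trace, rpc_cmd is assumed to be the target-side helper (its socket is not shown in this part of the log), and the shortened DHHC-1 secrets are placeholders, not the real key material:

#!/usr/bin/env bash
# Hedged sketch of one connect_authenticate iteration as traced above; not the script itself.
hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9

# 1. Restrict the host-side initiator to the digest/DH group under test.
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

# 2. Authorize the host NQN on the subsystem with the DH-HMAC-CHAP key pair under test.
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3. Attach from the SPDK host app; this is where the authentication handshake runs.
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 4. Verify the controller exists and the qpair reports a completed authentication.
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'                  # expect: nvme0
rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state' # expect: completed

# 5. Detach, then repeat the same authentication through the kernel initiator.
hostrpc bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 \
    --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."
nvme disconnect -n "$subnqn"

# 6. Clean up before the next digest/dhgroup/key combination.
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"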
00:09:14.293 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:14.293 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:14.293 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:14.293 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:14.293 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.293 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.293 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.293 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:14.293 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:14.293 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:14.550 00:09:14.550 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:14.550 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:14.550 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:14.808 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:14.808 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:14.808 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.808 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.808 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.808 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:14.808 { 00:09:14.808 "cntlid": 19, 00:09:14.808 "qid": 0, 00:09:14.808 "state": "enabled", 00:09:14.808 "thread": "nvmf_tgt_poll_group_000", 00:09:14.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:14.808 "listen_address": { 00:09:14.808 "trtype": "TCP", 00:09:14.808 "adrfam": "IPv4", 00:09:14.808 "traddr": "10.0.0.3", 00:09:14.808 "trsvcid": "4420" 00:09:14.808 }, 00:09:14.808 "peer_address": { 00:09:14.808 "trtype": "TCP", 00:09:14.808 "adrfam": "IPv4", 00:09:14.808 "traddr": "10.0.0.1", 00:09:14.808 "trsvcid": "60658" 00:09:14.808 
}, 00:09:14.808 "auth": { 00:09:14.808 "state": "completed", 00:09:14.808 "digest": "sha256", 00:09:14.808 "dhgroup": "ffdhe3072" 00:09:14.808 } 00:09:14.808 } 00:09:14.808 ]' 00:09:14.808 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:14.808 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:14.808 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:14.808 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:14.809 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:14.809 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:14.809 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:14.809 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:15.116 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:09:15.116 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:09:16.048 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:16.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:16.048 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:16.048 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.048 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.048 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.048 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:16.048 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:16.048 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:16.048 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:09:16.048 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
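The checks that follow each attach in the trace can be read as the assertions below (a hedged reconstruction: the qpairs JSON is the array printed by nvmf_subsystem_get_qpairs a few entries above, and the expected digest/dhgroup follow whichever combination the current loop iteration configured):

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # controller really attached
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]           # DH-HMAC-CHAP handshake finished
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]              # negotiated digest matches the configured one
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]           # negotiated DH group matches the configured one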
00:09:16.048 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:16.048 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:16.048 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:16.048 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:16.048 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:16.048 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.048 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.048 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.048 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:16.048 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:16.048 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:16.306 00:09:16.306 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:16.306 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:16.306 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:16.565 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:16.565 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:16.565 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.565 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.565 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.565 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:16.565 { 00:09:16.565 "cntlid": 21, 00:09:16.565 "qid": 0, 00:09:16.565 "state": "enabled", 00:09:16.565 "thread": "nvmf_tgt_poll_group_000", 00:09:16.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:16.565 "listen_address": { 00:09:16.565 "trtype": "TCP", 00:09:16.565 "adrfam": "IPv4", 00:09:16.565 "traddr": "10.0.0.3", 00:09:16.565 "trsvcid": "4420" 00:09:16.565 }, 00:09:16.565 "peer_address": { 00:09:16.565 "trtype": "TCP", 
00:09:16.565 "adrfam": "IPv4", 00:09:16.565 "traddr": "10.0.0.1", 00:09:16.565 "trsvcid": "60692" 00:09:16.565 }, 00:09:16.565 "auth": { 00:09:16.565 "state": "completed", 00:09:16.565 "digest": "sha256", 00:09:16.565 "dhgroup": "ffdhe3072" 00:09:16.565 } 00:09:16.565 } 00:09:16.565 ]' 00:09:16.565 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:16.824 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:16.824 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:16.824 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:16.824 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:16.824 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:16.824 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:16.824 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:17.082 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:09:17.082 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:09:17.646 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:17.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:17.646 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:17.646 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.646 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:17.646 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.646 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:17.646 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:17.646 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:17.905 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:09:17.905 
20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:17.905 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:17.905 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:17.905 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:17.905 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:17.905 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key3 00:09:17.905 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.905 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:17.905 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.905 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:17.905 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:17.905 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:18.163 00:09:18.163 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:18.163 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:18.163 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:18.421 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:18.421 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:18.421 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.421 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:18.421 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.421 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:18.421 { 00:09:18.421 "cntlid": 23, 00:09:18.421 "qid": 0, 00:09:18.421 "state": "enabled", 00:09:18.421 "thread": "nvmf_tgt_poll_group_000", 00:09:18.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:18.421 "listen_address": { 00:09:18.421 "trtype": "TCP", 00:09:18.421 "adrfam": "IPv4", 00:09:18.421 "traddr": "10.0.0.3", 00:09:18.421 "trsvcid": "4420" 00:09:18.421 }, 00:09:18.421 "peer_address": { 00:09:18.421 
"trtype": "TCP", 00:09:18.421 "adrfam": "IPv4", 00:09:18.421 "traddr": "10.0.0.1", 00:09:18.421 "trsvcid": "60718" 00:09:18.421 }, 00:09:18.421 "auth": { 00:09:18.421 "state": "completed", 00:09:18.421 "digest": "sha256", 00:09:18.421 "dhgroup": "ffdhe3072" 00:09:18.421 } 00:09:18.421 } 00:09:18.421 ]' 00:09:18.421 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:18.421 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:18.421 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:18.421 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:18.421 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:18.679 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:18.679 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:18.679 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:18.679 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:09:18.679 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:09:19.243 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:19.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:19.243 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:19.243 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.243 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:19.243 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.243 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:19.243 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:19.243 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:19.243 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:19.501 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 
00:09:19.501 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:19.501 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:19.501 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:19.501 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:19.501 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:19.501 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:19.501 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.501 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:19.501 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.501 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:19.501 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:19.501 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:19.814 00:09:19.814 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:19.814 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:19.814 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:20.072 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:20.072 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:20.072 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.072 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:20.072 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.072 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:20.072 { 00:09:20.072 "cntlid": 25, 00:09:20.072 "qid": 0, 00:09:20.072 "state": "enabled", 00:09:20.072 "thread": "nvmf_tgt_poll_group_000", 00:09:20.072 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:20.072 "listen_address": { 00:09:20.072 "trtype": "TCP", 00:09:20.072 "adrfam": "IPv4", 00:09:20.072 
"traddr": "10.0.0.3", 00:09:20.072 "trsvcid": "4420" 00:09:20.072 }, 00:09:20.072 "peer_address": { 00:09:20.072 "trtype": "TCP", 00:09:20.072 "adrfam": "IPv4", 00:09:20.072 "traddr": "10.0.0.1", 00:09:20.072 "trsvcid": "34970" 00:09:20.072 }, 00:09:20.072 "auth": { 00:09:20.072 "state": "completed", 00:09:20.072 "digest": "sha256", 00:09:20.072 "dhgroup": "ffdhe4096" 00:09:20.072 } 00:09:20.072 } 00:09:20.072 ]' 00:09:20.072 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:20.072 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:20.072 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:20.072 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:20.072 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:20.072 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:20.072 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:20.072 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:20.330 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:09:20.330 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:09:20.896 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:20.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:20.896 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:20.896 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.896 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:20.896 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.896 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:20.896 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:20.896 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:21.154 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:09:21.154 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:21.154 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:21.154 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:21.154 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:21.154 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:21.154 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:21.154 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.154 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:21.154 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.154 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:21.154 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:21.154 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:21.412 00:09:21.412 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:21.412 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:21.412 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:21.670 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:21.670 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:21.670 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.670 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:21.670 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.670 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:21.670 { 00:09:21.670 "cntlid": 27, 00:09:21.670 "qid": 0, 00:09:21.670 "state": "enabled", 00:09:21.670 "thread": "nvmf_tgt_poll_group_000", 
00:09:21.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:21.670 "listen_address": { 00:09:21.670 "trtype": "TCP", 00:09:21.670 "adrfam": "IPv4", 00:09:21.670 "traddr": "10.0.0.3", 00:09:21.670 "trsvcid": "4420" 00:09:21.670 }, 00:09:21.670 "peer_address": { 00:09:21.670 "trtype": "TCP", 00:09:21.670 "adrfam": "IPv4", 00:09:21.670 "traddr": "10.0.0.1", 00:09:21.670 "trsvcid": "34996" 00:09:21.670 }, 00:09:21.671 "auth": { 00:09:21.671 "state": "completed", 00:09:21.671 "digest": "sha256", 00:09:21.671 "dhgroup": "ffdhe4096" 00:09:21.671 } 00:09:21.671 } 00:09:21.671 ]' 00:09:21.671 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:21.671 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:21.671 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:21.671 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:21.671 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:21.671 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:21.671 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:21.671 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:21.928 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:09:21.928 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:09:22.495 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:22.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:22.495 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:22.495 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.495 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:22.495 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.495 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:22.495 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:22.495 20:32:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:22.495 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:09:22.495 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:22.495 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:22.495 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:22.495 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:22.495 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:22.495 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:22.495 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.495 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:22.495 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.495 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:22.495 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:22.495 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:23.062 00:09:23.062 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:23.062 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:23.062 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:23.062 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:23.062 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:23.062 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.062 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:23.062 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.062 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:23.062 { 
00:09:23.062 "cntlid": 29, 00:09:23.062 "qid": 0, 00:09:23.062 "state": "enabled", 00:09:23.062 "thread": "nvmf_tgt_poll_group_000", 00:09:23.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:23.062 "listen_address": { 00:09:23.062 "trtype": "TCP", 00:09:23.062 "adrfam": "IPv4", 00:09:23.062 "traddr": "10.0.0.3", 00:09:23.062 "trsvcid": "4420" 00:09:23.062 }, 00:09:23.062 "peer_address": { 00:09:23.062 "trtype": "TCP", 00:09:23.062 "adrfam": "IPv4", 00:09:23.062 "traddr": "10.0.0.1", 00:09:23.062 "trsvcid": "35030" 00:09:23.062 }, 00:09:23.062 "auth": { 00:09:23.062 "state": "completed", 00:09:23.062 "digest": "sha256", 00:09:23.062 "dhgroup": "ffdhe4096" 00:09:23.062 } 00:09:23.062 } 00:09:23.062 ]' 00:09:23.062 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:23.062 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:23.062 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:23.321 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:23.321 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:23.321 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:23.321 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:23.321 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:23.321 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:09:23.321 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:09:24.066 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:24.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:24.066 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:24.066 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.066 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:24.066 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.066 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:24.066 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:24.066 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:24.066 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:09:24.066 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:24.066 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:24.066 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:24.066 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:24.066 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:24.066 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key3 00:09:24.066 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.066 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:24.066 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.066 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:24.066 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:24.066 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:24.324 00:09:24.324 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:24.324 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:24.324 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:24.583 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:24.583 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:24.583 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.583 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:24.583 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.583 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:09:24.583 { 00:09:24.583 "cntlid": 31, 00:09:24.583 "qid": 0, 00:09:24.583 "state": "enabled", 00:09:24.583 "thread": "nvmf_tgt_poll_group_000", 00:09:24.583 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:24.583 "listen_address": { 00:09:24.583 "trtype": "TCP", 00:09:24.583 "adrfam": "IPv4", 00:09:24.583 "traddr": "10.0.0.3", 00:09:24.583 "trsvcid": "4420" 00:09:24.583 }, 00:09:24.583 "peer_address": { 00:09:24.583 "trtype": "TCP", 00:09:24.583 "adrfam": "IPv4", 00:09:24.583 "traddr": "10.0.0.1", 00:09:24.583 "trsvcid": "35048" 00:09:24.583 }, 00:09:24.583 "auth": { 00:09:24.583 "state": "completed", 00:09:24.583 "digest": "sha256", 00:09:24.583 "dhgroup": "ffdhe4096" 00:09:24.583 } 00:09:24.583 } 00:09:24.583 ]' 00:09:24.583 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:24.583 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:24.583 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:24.583 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:24.583 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:24.867 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:24.867 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:24.867 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:25.126 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:09:25.126 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:09:25.692 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:25.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:25.692 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:25.692 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.692 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.692 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.692 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:25.692 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:25.692 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:25.692 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:25.692 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:09:25.692 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:25.692 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:25.692 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:09:25.692 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:25.692 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:25.692 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:25.692 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.692 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.692 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.692 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:25.692 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:25.692 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:26.257 00:09:26.257 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:26.257 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:26.257 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:26.515 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:26.515 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:26.515 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.515 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:26.515 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:09:26.515 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:26.515 { 00:09:26.515 "cntlid": 33, 00:09:26.515 "qid": 0, 00:09:26.515 "state": "enabled", 00:09:26.515 "thread": "nvmf_tgt_poll_group_000", 00:09:26.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:26.515 "listen_address": { 00:09:26.515 "trtype": "TCP", 00:09:26.515 "adrfam": "IPv4", 00:09:26.515 "traddr": "10.0.0.3", 00:09:26.515 "trsvcid": "4420" 00:09:26.515 }, 00:09:26.515 "peer_address": { 00:09:26.515 "trtype": "TCP", 00:09:26.515 "adrfam": "IPv4", 00:09:26.515 "traddr": "10.0.0.1", 00:09:26.515 "trsvcid": "35090" 00:09:26.515 }, 00:09:26.515 "auth": { 00:09:26.515 "state": "completed", 00:09:26.515 "digest": "sha256", 00:09:26.515 "dhgroup": "ffdhe6144" 00:09:26.515 } 00:09:26.515 } 00:09:26.515 ]' 00:09:26.515 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:26.515 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:26.515 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:26.515 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:09:26.515 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:26.515 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:26.515 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:26.515 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:26.774 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:09:26.774 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:09:27.340 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:27.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:27.340 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:27.340 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.340 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:27.340 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
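
Note on the check that closes each iteration above: once the attach succeeds, the script reads the controller name back over the host socket, dumps the subsystem's qpairs on the target, asserts that the negotiated digest, dhgroup and auth state match what was configured, and detaches before the next combination. A condensed sketch, using only commands visible in the trace; the digest/dhgroup pair sweeps across iterations, so they appear as parameters here.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    digest=sha256        # the combination under test at this point in the log
    dhgroup=ffdhe6144

    # Host side: the attached controller should be reported as nvme0.
    [[ $("$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Target side: the qpair must show a completed DH-HMAC-CHAP handshake with
    # exactly the digest/dhgroup that was configured.
    qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]

    # Drop the host-side controller before the next key/dhgroup combination.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
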
00:09:27.340 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:27.340 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:27.340 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:27.598 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:09:27.598 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:27.598 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:27.598 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:09:27.598 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:27.598 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:27.598 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:27.598 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.598 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:27.598 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.598 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:27.598 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:27.598 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:27.857 00:09:27.857 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:27.857 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:27.857 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:28.115 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:28.115 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:28.115 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.115 20:32:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:28.115 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.115 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:28.115 { 00:09:28.115 "cntlid": 35, 00:09:28.115 "qid": 0, 00:09:28.115 "state": "enabled", 00:09:28.115 "thread": "nvmf_tgt_poll_group_000", 00:09:28.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:28.115 "listen_address": { 00:09:28.115 "trtype": "TCP", 00:09:28.115 "adrfam": "IPv4", 00:09:28.115 "traddr": "10.0.0.3", 00:09:28.115 "trsvcid": "4420" 00:09:28.115 }, 00:09:28.115 "peer_address": { 00:09:28.115 "trtype": "TCP", 00:09:28.115 "adrfam": "IPv4", 00:09:28.115 "traddr": "10.0.0.1", 00:09:28.115 "trsvcid": "35118" 00:09:28.115 }, 00:09:28.115 "auth": { 00:09:28.115 "state": "completed", 00:09:28.115 "digest": "sha256", 00:09:28.115 "dhgroup": "ffdhe6144" 00:09:28.115 } 00:09:28.115 } 00:09:28.115 ]' 00:09:28.115 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:28.115 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:28.115 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:28.115 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:09:28.115 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:28.115 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:28.115 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:28.115 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:28.372 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:09:28.372 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:09:28.938 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:28.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:28.938 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:28.938 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.938 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:09:28.938 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.938 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:28.938 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:28.938 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:28.938 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:09:28.938 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:28.938 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:28.938 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:09:28.938 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:28.938 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:28.938 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:28.938 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.938 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:28.938 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.938 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:28.938 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:28.938 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:29.504 00:09:29.504 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:29.504 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:29.504 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:29.762 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:29.762 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:29.762 20:32:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.762 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:29.762 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.762 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:29.762 { 00:09:29.762 "cntlid": 37, 00:09:29.762 "qid": 0, 00:09:29.762 "state": "enabled", 00:09:29.762 "thread": "nvmf_tgt_poll_group_000", 00:09:29.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:29.762 "listen_address": { 00:09:29.762 "trtype": "TCP", 00:09:29.762 "adrfam": "IPv4", 00:09:29.762 "traddr": "10.0.0.3", 00:09:29.762 "trsvcid": "4420" 00:09:29.762 }, 00:09:29.762 "peer_address": { 00:09:29.762 "trtype": "TCP", 00:09:29.762 "adrfam": "IPv4", 00:09:29.762 "traddr": "10.0.0.1", 00:09:29.762 "trsvcid": "35162" 00:09:29.762 }, 00:09:29.762 "auth": { 00:09:29.762 "state": "completed", 00:09:29.762 "digest": "sha256", 00:09:29.762 "dhgroup": "ffdhe6144" 00:09:29.762 } 00:09:29.762 } 00:09:29.762 ]' 00:09:29.762 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:29.762 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:29.762 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:29.762 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:09:29.762 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:29.762 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:29.762 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:29.762 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:30.019 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:09:30.019 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:09:30.655 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:30.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:30.655 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:30.655 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:30.655 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:30.655 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.655 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:30.655 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:30.655 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:30.655 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:09:30.655 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:30.655 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:30.655 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:09:30.655 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:30.655 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:30.655 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key3 00:09:30.655 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.655 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:30.938 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.938 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:30.938 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:30.938 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:31.197 00:09:31.197 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:31.197 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:31.197 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:31.197 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:31.197 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:31.197 
20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.197 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:31.197 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.197 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:31.197 { 00:09:31.197 "cntlid": 39, 00:09:31.197 "qid": 0, 00:09:31.197 "state": "enabled", 00:09:31.197 "thread": "nvmf_tgt_poll_group_000", 00:09:31.197 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:31.197 "listen_address": { 00:09:31.197 "trtype": "TCP", 00:09:31.197 "adrfam": "IPv4", 00:09:31.197 "traddr": "10.0.0.3", 00:09:31.197 "trsvcid": "4420" 00:09:31.197 }, 00:09:31.197 "peer_address": { 00:09:31.197 "trtype": "TCP", 00:09:31.197 "adrfam": "IPv4", 00:09:31.197 "traddr": "10.0.0.1", 00:09:31.197 "trsvcid": "45372" 00:09:31.197 }, 00:09:31.197 "auth": { 00:09:31.197 "state": "completed", 00:09:31.197 "digest": "sha256", 00:09:31.197 "dhgroup": "ffdhe6144" 00:09:31.197 } 00:09:31.197 } 00:09:31.197 ]' 00:09:31.197 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:31.197 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:31.197 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:31.456 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:09:31.456 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:31.456 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:31.456 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:31.456 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:31.715 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:09:31.715 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:09:32.282 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:32.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:32.282 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:32.282 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.282 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
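
At this point the dhgroup loop has worked through ffdhe3072, ffdhe4096 and ffdhe6144 for every key and is about to start ffdhe8192, with the sha256 digest fixed across this stretch of the log. Each iteration also cross-checks the same credentials from the kernel initiator with nvme-cli before de-authorizing the host, roughly as below; the DHHC-1 secret strings are placeholders here (the literal values appear in the trace), everything else is copied from the log, and iterations without a controller key (key3 in this run) pass only --dhchap-secret.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9
    HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9

    # Kernel initiator: connect with the same DH-HMAC-CHAP secrets the SPDK host
    # used, then disconnect again.
    nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid "$HOSTID" -l 0 \
        --dhchap-secret '<DHHC-1 host secret from the trace>' \
        --dhchap-ctrl-secret '<DHHC-1 controller secret from the trace>'
    nvme disconnect -n "$SUBNQN"

    # Target side: de-authorize the host so the next key/dhgroup pair starts clean.
    "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
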
00:09:32.282 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.282 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:32.282 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:32.282 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:32.282 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:32.282 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:09:32.282 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:32.282 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:32.282 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:09:32.282 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:32.282 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:32.282 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:32.282 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.282 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.282 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.282 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:32.282 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:32.282 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:32.848 00:09:32.848 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:32.848 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:32.848 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:33.107 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:33.107 20:32:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:33.107 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.107 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.107 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.107 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:33.107 { 00:09:33.107 "cntlid": 41, 00:09:33.107 "qid": 0, 00:09:33.107 "state": "enabled", 00:09:33.107 "thread": "nvmf_tgt_poll_group_000", 00:09:33.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:33.107 "listen_address": { 00:09:33.107 "trtype": "TCP", 00:09:33.107 "adrfam": "IPv4", 00:09:33.107 "traddr": "10.0.0.3", 00:09:33.107 "trsvcid": "4420" 00:09:33.107 }, 00:09:33.107 "peer_address": { 00:09:33.107 "trtype": "TCP", 00:09:33.107 "adrfam": "IPv4", 00:09:33.107 "traddr": "10.0.0.1", 00:09:33.107 "trsvcid": "45404" 00:09:33.107 }, 00:09:33.107 "auth": { 00:09:33.107 "state": "completed", 00:09:33.107 "digest": "sha256", 00:09:33.107 "dhgroup": "ffdhe8192" 00:09:33.107 } 00:09:33.107 } 00:09:33.107 ]' 00:09:33.107 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:33.107 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:33.107 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:33.107 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:09:33.107 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:33.107 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:33.107 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:33.107 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:33.366 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:09:33.366 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:09:33.932 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:33.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:33.932 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:33.932 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.932 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.932 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.932 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:33.932 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:33.932 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:34.190 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:09:34.190 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:34.190 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:34.190 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:09:34.191 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:34.191 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:34.191 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:34.191 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.191 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.191 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.191 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:34.191 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:34.191 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:34.766 00:09:34.766 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:34.766 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:34.766 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:34.766 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:34.766 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:35.024 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.024 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.024 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.024 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:35.024 { 00:09:35.024 "cntlid": 43, 00:09:35.024 "qid": 0, 00:09:35.024 "state": "enabled", 00:09:35.024 "thread": "nvmf_tgt_poll_group_000", 00:09:35.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:35.024 "listen_address": { 00:09:35.024 "trtype": "TCP", 00:09:35.024 "adrfam": "IPv4", 00:09:35.024 "traddr": "10.0.0.3", 00:09:35.024 "trsvcid": "4420" 00:09:35.024 }, 00:09:35.024 "peer_address": { 00:09:35.024 "trtype": "TCP", 00:09:35.024 "adrfam": "IPv4", 00:09:35.024 "traddr": "10.0.0.1", 00:09:35.024 "trsvcid": "45438" 00:09:35.024 }, 00:09:35.024 "auth": { 00:09:35.024 "state": "completed", 00:09:35.024 "digest": "sha256", 00:09:35.024 "dhgroup": "ffdhe8192" 00:09:35.024 } 00:09:35.024 } 00:09:35.024 ]' 00:09:35.024 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:35.024 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:35.024 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:35.024 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:09:35.024 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:35.024 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:35.024 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:35.024 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:35.282 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:09:35.282 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:09:35.849 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:35.849 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:35.849 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:35.849 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.849 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.849 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.849 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:35.849 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:35.849 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:36.105 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:09:36.105 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:36.105 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:36.105 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:09:36.105 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:36.105 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:36.105 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:36.105 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.105 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.105 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.105 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:36.105 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:36.105 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:36.670 00:09:36.670 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:36.670 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:36.670 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:36.670 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:36.670 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:36.670 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.670 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.670 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.670 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:36.670 { 00:09:36.670 "cntlid": 45, 00:09:36.670 "qid": 0, 00:09:36.670 "state": "enabled", 00:09:36.670 "thread": "nvmf_tgt_poll_group_000", 00:09:36.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:36.670 "listen_address": { 00:09:36.670 "trtype": "TCP", 00:09:36.670 "adrfam": "IPv4", 00:09:36.670 "traddr": "10.0.0.3", 00:09:36.670 "trsvcid": "4420" 00:09:36.670 }, 00:09:36.670 "peer_address": { 00:09:36.670 "trtype": "TCP", 00:09:36.670 "adrfam": "IPv4", 00:09:36.670 "traddr": "10.0.0.1", 00:09:36.670 "trsvcid": "45468" 00:09:36.670 }, 00:09:36.670 "auth": { 00:09:36.670 "state": "completed", 00:09:36.670 "digest": "sha256", 00:09:36.670 "dhgroup": "ffdhe8192" 00:09:36.670 } 00:09:36.670 } 00:09:36.670 ]' 00:09:36.670 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:36.929 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:36.929 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:36.929 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:09:36.929 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:36.929 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:36.929 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:36.929 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:36.929 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:09:36.929 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:09:37.494 20:32:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:37.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:37.494 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:37.494 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.494 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.494 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.494 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:37.494 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:37.494 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:37.752 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:09:37.752 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:37.752 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:37.752 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:09:37.752 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:37.752 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:37.752 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key3 00:09:37.752 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.752 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.752 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.752 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:37.752 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:37.752 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:38.316 00:09:38.316 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:38.316 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:38.316 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:38.316 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:38.316 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:38.316 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.316 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.574 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.574 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:38.574 { 00:09:38.574 "cntlid": 47, 00:09:38.574 "qid": 0, 00:09:38.574 "state": "enabled", 00:09:38.574 "thread": "nvmf_tgt_poll_group_000", 00:09:38.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:38.574 "listen_address": { 00:09:38.574 "trtype": "TCP", 00:09:38.574 "adrfam": "IPv4", 00:09:38.574 "traddr": "10.0.0.3", 00:09:38.574 "trsvcid": "4420" 00:09:38.574 }, 00:09:38.574 "peer_address": { 00:09:38.574 "trtype": "TCP", 00:09:38.574 "adrfam": "IPv4", 00:09:38.574 "traddr": "10.0.0.1", 00:09:38.574 "trsvcid": "45502" 00:09:38.574 }, 00:09:38.574 "auth": { 00:09:38.574 "state": "completed", 00:09:38.574 "digest": "sha256", 00:09:38.574 "dhgroup": "ffdhe8192" 00:09:38.574 } 00:09:38.574 } 00:09:38.574 ]' 00:09:38.574 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:38.574 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:38.574 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:38.574 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:09:38.574 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:38.574 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:38.575 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:38.575 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:38.831 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:09:38.831 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:09:39.397 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:09:39.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:39.397 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:39.397 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.397 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.397 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.397 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:09:39.397 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:39.397 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:39.397 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:39.397 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:39.655 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:09:39.655 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:39.655 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:39.655 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:39.655 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:39.655 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:39.655 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:39.655 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.655 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.655 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.655 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:39.655 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:39.655 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:09:39.913 00:09:39.913 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:39.913 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:39.913 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:39.913 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:39.913 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:39.913 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.913 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.913 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.913 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:39.913 { 00:09:39.913 "cntlid": 49, 00:09:39.913 "qid": 0, 00:09:39.913 "state": "enabled", 00:09:39.913 "thread": "nvmf_tgt_poll_group_000", 00:09:39.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:39.913 "listen_address": { 00:09:39.913 "trtype": "TCP", 00:09:39.913 "adrfam": "IPv4", 00:09:39.913 "traddr": "10.0.0.3", 00:09:39.913 "trsvcid": "4420" 00:09:39.913 }, 00:09:39.913 "peer_address": { 00:09:39.913 "trtype": "TCP", 00:09:39.913 "adrfam": "IPv4", 00:09:39.913 "traddr": "10.0.0.1", 00:09:39.913 "trsvcid": "47080" 00:09:39.913 }, 00:09:39.913 "auth": { 00:09:39.913 "state": "completed", 00:09:39.913 "digest": "sha384", 00:09:39.913 "dhgroup": "null" 00:09:39.913 } 00:09:39.913 } 00:09:39.913 ]' 00:09:39.913 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:40.169 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:40.169 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:40.169 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:40.169 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:40.169 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:40.169 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:40.169 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:40.426 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:09:40.426 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 
38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:09:40.992 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:40.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:40.992 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:40.992 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.992 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:40.992 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.992 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:40.992 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:40.992 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:41.251 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:09:41.251 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:41.251 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:41.251 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:41.251 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:41.251 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:41.251 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:41.251 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.251 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.251 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.251 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:41.251 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:41.251 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:41.251 00:09:41.251 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:41.251 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:41.251 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:41.509 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:41.509 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:41.509 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.509 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.509 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.509 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:41.509 { 00:09:41.509 "cntlid": 51, 00:09:41.509 "qid": 0, 00:09:41.509 "state": "enabled", 00:09:41.509 "thread": "nvmf_tgt_poll_group_000", 00:09:41.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:41.509 "listen_address": { 00:09:41.509 "trtype": "TCP", 00:09:41.509 "adrfam": "IPv4", 00:09:41.509 "traddr": "10.0.0.3", 00:09:41.509 "trsvcid": "4420" 00:09:41.509 }, 00:09:41.509 "peer_address": { 00:09:41.509 "trtype": "TCP", 00:09:41.509 "adrfam": "IPv4", 00:09:41.509 "traddr": "10.0.0.1", 00:09:41.509 "trsvcid": "47110" 00:09:41.509 }, 00:09:41.509 "auth": { 00:09:41.509 "state": "completed", 00:09:41.509 "digest": "sha384", 00:09:41.509 "dhgroup": "null" 00:09:41.509 } 00:09:41.509 } 00:09:41.509 ]' 00:09:41.509 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:41.509 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:41.509 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:41.767 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:41.767 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:41.767 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:41.767 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:41.767 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:41.767 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:09:41.767 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:09:42.333 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:42.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:42.333 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:42.333 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.333 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.333 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.333 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:42.333 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:42.333 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:42.591 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:09:42.592 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:42.592 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:42.592 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:42.592 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:42.592 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:42.592 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:42.592 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.592 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.592 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.592 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:42.592 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:42.592 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:42.852 00:09:42.852 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:42.852 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:42.852 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:43.111 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:43.111 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:43.111 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.111 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.111 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.111 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:43.111 { 00:09:43.111 "cntlid": 53, 00:09:43.111 "qid": 0, 00:09:43.111 "state": "enabled", 00:09:43.111 "thread": "nvmf_tgt_poll_group_000", 00:09:43.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:43.111 "listen_address": { 00:09:43.111 "trtype": "TCP", 00:09:43.111 "adrfam": "IPv4", 00:09:43.111 "traddr": "10.0.0.3", 00:09:43.111 "trsvcid": "4420" 00:09:43.111 }, 00:09:43.111 "peer_address": { 00:09:43.111 "trtype": "TCP", 00:09:43.111 "adrfam": "IPv4", 00:09:43.111 "traddr": "10.0.0.1", 00:09:43.111 "trsvcid": "47142" 00:09:43.111 }, 00:09:43.111 "auth": { 00:09:43.111 "state": "completed", 00:09:43.111 "digest": "sha384", 00:09:43.111 "dhgroup": "null" 00:09:43.111 } 00:09:43.111 } 00:09:43.111 ]' 00:09:43.111 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:43.111 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:43.111 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:43.111 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:43.111 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:43.111 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:43.111 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:43.111 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:43.369 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:09:43.369 20:32:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:09:43.936 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:43.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:43.936 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:43.936 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.936 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.936 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.936 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:43.936 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:43.936 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:44.194 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:09:44.194 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:44.194 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:44.194 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:44.194 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:44.194 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:44.194 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key3 00:09:44.194 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.194 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.194 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.194 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:44.194 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:44.194 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:44.451 00:09:44.451 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:44.451 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:44.451 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:44.710 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:44.710 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:44.710 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.710 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.710 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.710 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:44.710 { 00:09:44.710 "cntlid": 55, 00:09:44.710 "qid": 0, 00:09:44.710 "state": "enabled", 00:09:44.710 "thread": "nvmf_tgt_poll_group_000", 00:09:44.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:44.710 "listen_address": { 00:09:44.710 "trtype": "TCP", 00:09:44.710 "adrfam": "IPv4", 00:09:44.710 "traddr": "10.0.0.3", 00:09:44.710 "trsvcid": "4420" 00:09:44.710 }, 00:09:44.710 "peer_address": { 00:09:44.710 "trtype": "TCP", 00:09:44.710 "adrfam": "IPv4", 00:09:44.710 "traddr": "10.0.0.1", 00:09:44.710 "trsvcid": "47156" 00:09:44.710 }, 00:09:44.710 "auth": { 00:09:44.710 "state": "completed", 00:09:44.710 "digest": "sha384", 00:09:44.710 "dhgroup": "null" 00:09:44.710 } 00:09:44.710 } 00:09:44.710 ]' 00:09:44.710 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:44.710 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:44.710 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:44.710 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:44.710 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:44.710 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:44.710 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:44.710 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:44.968 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:09:44.968 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 
10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:09:45.584 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:45.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:45.584 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:45.584 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.584 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.584 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.584 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:45.584 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:45.584 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:09:45.584 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:09:45.843 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:09:45.843 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:45.843 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:45.843 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:45.843 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:45.843 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:45.843 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:45.843 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.843 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.843 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.843 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:45.843 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:45.843 20:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:46.102 00:09:46.102 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:46.102 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:46.102 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:46.102 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:46.102 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:46.102 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.102 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.102 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.102 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:46.102 { 00:09:46.102 "cntlid": 57, 00:09:46.102 "qid": 0, 00:09:46.102 "state": "enabled", 00:09:46.102 "thread": "nvmf_tgt_poll_group_000", 00:09:46.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:46.102 "listen_address": { 00:09:46.102 "trtype": "TCP", 00:09:46.102 "adrfam": "IPv4", 00:09:46.102 "traddr": "10.0.0.3", 00:09:46.102 "trsvcid": "4420" 00:09:46.102 }, 00:09:46.102 "peer_address": { 00:09:46.102 "trtype": "TCP", 00:09:46.102 "adrfam": "IPv4", 00:09:46.102 "traddr": "10.0.0.1", 00:09:46.102 "trsvcid": "47176" 00:09:46.102 }, 00:09:46.102 "auth": { 00:09:46.102 "state": "completed", 00:09:46.102 "digest": "sha384", 00:09:46.102 "dhgroup": "ffdhe2048" 00:09:46.102 } 00:09:46.102 } 00:09:46.102 ]' 00:09:46.102 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:46.360 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:46.360 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:46.360 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:46.360 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:46.360 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:46.360 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:46.360 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:46.618 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: 
--dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:09:46.618 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:09:47.184 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:47.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:47.184 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:47.184 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.184 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.184 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.184 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:47.184 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:09:47.184 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:09:47.442 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:09:47.442 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:47.442 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:47.442 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:47.442 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:47.442 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:47.442 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:47.442 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.442 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.443 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.443 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:47.443 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:47.443 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:47.701 00:09:47.701 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:47.701 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:47.701 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:47.959 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:47.959 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:47.959 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.959 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.959 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.959 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:47.959 { 00:09:47.959 "cntlid": 59, 00:09:47.959 "qid": 0, 00:09:47.959 "state": "enabled", 00:09:47.959 "thread": "nvmf_tgt_poll_group_000", 00:09:47.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:47.959 "listen_address": { 00:09:47.959 "trtype": "TCP", 00:09:47.959 "adrfam": "IPv4", 00:09:47.959 "traddr": "10.0.0.3", 00:09:47.959 "trsvcid": "4420" 00:09:47.959 }, 00:09:47.959 "peer_address": { 00:09:47.959 "trtype": "TCP", 00:09:47.959 "adrfam": "IPv4", 00:09:47.959 "traddr": "10.0.0.1", 00:09:47.959 "trsvcid": "47206" 00:09:47.959 }, 00:09:47.959 "auth": { 00:09:47.959 "state": "completed", 00:09:47.959 "digest": "sha384", 00:09:47.959 "dhgroup": "ffdhe2048" 00:09:47.959 } 00:09:47.959 } 00:09:47.959 ]' 00:09:47.959 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:47.959 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:47.959 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:47.959 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:47.959 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:47.959 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:47.959 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:47.959 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:48.218 
20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:09:48.218 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:09:48.783 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:48.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:48.783 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:48.783 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.783 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.784 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.784 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:48.784 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:09:48.784 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:09:49.061 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:09:49.061 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:49.061 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:49.061 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:49.061 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:49.061 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:49.061 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:49.061 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.061 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.061 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.061 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:49.061 20:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:49.061 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:49.320 00:09:49.320 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:49.320 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:49.320 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:49.578 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:49.578 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:49.578 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.578 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.578 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.578 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:49.578 { 00:09:49.578 "cntlid": 61, 00:09:49.578 "qid": 0, 00:09:49.578 "state": "enabled", 00:09:49.578 "thread": "nvmf_tgt_poll_group_000", 00:09:49.578 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:49.578 "listen_address": { 00:09:49.578 "trtype": "TCP", 00:09:49.578 "adrfam": "IPv4", 00:09:49.578 "traddr": "10.0.0.3", 00:09:49.578 "trsvcid": "4420" 00:09:49.578 }, 00:09:49.578 "peer_address": { 00:09:49.578 "trtype": "TCP", 00:09:49.578 "adrfam": "IPv4", 00:09:49.578 "traddr": "10.0.0.1", 00:09:49.578 "trsvcid": "47234" 00:09:49.578 }, 00:09:49.578 "auth": { 00:09:49.578 "state": "completed", 00:09:49.578 "digest": "sha384", 00:09:49.578 "dhgroup": "ffdhe2048" 00:09:49.578 } 00:09:49.578 } 00:09:49.578 ]' 00:09:49.578 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:49.578 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:49.578 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:49.578 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:49.578 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:49.578 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:49.578 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:49.578 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:49.836 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:09:49.836 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:09:50.400 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:50.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:50.400 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:50.400 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.400 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.400 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.400 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:50.401 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:09:50.401 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:09:50.658 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:09:50.658 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:50.658 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:50.658 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:50.658 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:50.658 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:50.658 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key3 00:09:50.658 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.658 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.658 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.658 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:09:50.658 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:50.658 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:50.916 00:09:50.916 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:50.916 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:50.916 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:51.174 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:51.174 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:51.174 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.174 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.174 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.174 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:51.174 { 00:09:51.174 "cntlid": 63, 00:09:51.174 "qid": 0, 00:09:51.174 "state": "enabled", 00:09:51.174 "thread": "nvmf_tgt_poll_group_000", 00:09:51.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:51.174 "listen_address": { 00:09:51.174 "trtype": "TCP", 00:09:51.174 "adrfam": "IPv4", 00:09:51.174 "traddr": "10.0.0.3", 00:09:51.174 "trsvcid": "4420" 00:09:51.174 }, 00:09:51.174 "peer_address": { 00:09:51.174 "trtype": "TCP", 00:09:51.174 "adrfam": "IPv4", 00:09:51.174 "traddr": "10.0.0.1", 00:09:51.174 "trsvcid": "56442" 00:09:51.174 }, 00:09:51.174 "auth": { 00:09:51.174 "state": "completed", 00:09:51.174 "digest": "sha384", 00:09:51.174 "dhgroup": "ffdhe2048" 00:09:51.174 } 00:09:51.174 } 00:09:51.174 ]' 00:09:51.174 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:51.175 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:51.175 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:51.175 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:51.175 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:51.175 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:51.175 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:51.175 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:51.432 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:09:51.432 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:09:51.998 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:51.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:51.998 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:51.998 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.998 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.998 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.998 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:51.998 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:51.998 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:09:51.998 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:09:52.256 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:09:52.256 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:52.256 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:52.256 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:52.256 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:52.256 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:52.256 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:52.256 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.256 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.256 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.256 20:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:52.256 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:52.256 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:52.512 00:09:52.512 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:52.512 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:52.512 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:52.768 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:52.768 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:52.768 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.768 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.768 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.768 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:52.768 { 00:09:52.768 "cntlid": 65, 00:09:52.768 "qid": 0, 00:09:52.768 "state": "enabled", 00:09:52.768 "thread": "nvmf_tgt_poll_group_000", 00:09:52.768 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:52.768 "listen_address": { 00:09:52.768 "trtype": "TCP", 00:09:52.768 "adrfam": "IPv4", 00:09:52.768 "traddr": "10.0.0.3", 00:09:52.768 "trsvcid": "4420" 00:09:52.768 }, 00:09:52.768 "peer_address": { 00:09:52.768 "trtype": "TCP", 00:09:52.768 "adrfam": "IPv4", 00:09:52.768 "traddr": "10.0.0.1", 00:09:52.768 "trsvcid": "56470" 00:09:52.768 }, 00:09:52.768 "auth": { 00:09:52.768 "state": "completed", 00:09:52.768 "digest": "sha384", 00:09:52.768 "dhgroup": "ffdhe3072" 00:09:52.768 } 00:09:52.768 } 00:09:52.768 ]' 00:09:52.768 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:52.768 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:52.768 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:52.768 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:52.768 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:52.768 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:52.768 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:52.768 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:53.024 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:09:53.024 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:09:53.589 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:53.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:53.589 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:53.589 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.589 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.589 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.589 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:53.589 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:09:53.589 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:09:53.847 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:09:53.847 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:53.847 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:53.847 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:53.847 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:53.847 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:53.847 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:53.847 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.847 20:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.847 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.847 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:53.847 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:53.847 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:54.105 00:09:54.105 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:54.105 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:54.105 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:54.362 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:54.362 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:54.362 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.362 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.362 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.362 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:54.362 { 00:09:54.362 "cntlid": 67, 00:09:54.362 "qid": 0, 00:09:54.362 "state": "enabled", 00:09:54.362 "thread": "nvmf_tgt_poll_group_000", 00:09:54.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:54.362 "listen_address": { 00:09:54.362 "trtype": "TCP", 00:09:54.362 "adrfam": "IPv4", 00:09:54.362 "traddr": "10.0.0.3", 00:09:54.362 "trsvcid": "4420" 00:09:54.362 }, 00:09:54.363 "peer_address": { 00:09:54.363 "trtype": "TCP", 00:09:54.363 "adrfam": "IPv4", 00:09:54.363 "traddr": "10.0.0.1", 00:09:54.363 "trsvcid": "56494" 00:09:54.363 }, 00:09:54.363 "auth": { 00:09:54.363 "state": "completed", 00:09:54.363 "digest": "sha384", 00:09:54.363 "dhgroup": "ffdhe3072" 00:09:54.363 } 00:09:54.363 } 00:09:54.363 ]' 00:09:54.363 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:54.363 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:54.363 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:54.363 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:54.363 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:09:54.363 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:54.363 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:54.363 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:54.620 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:09:54.620 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:09:55.184 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:55.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:55.184 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:55.184 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.184 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.184 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.184 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:55.184 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:09:55.184 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:09:55.443 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:09:55.443 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:55.443 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:55.443 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:55.444 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:55.444 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:55.444 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:55.444 20:33:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.444 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.444 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.444 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:55.444 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:55.444 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:55.702 00:09:55.702 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:55.702 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:55.702 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:55.960 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:55.960 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:55.960 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.960 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.960 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.960 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:55.960 { 00:09:55.960 "cntlid": 69, 00:09:55.960 "qid": 0, 00:09:55.960 "state": "enabled", 00:09:55.960 "thread": "nvmf_tgt_poll_group_000", 00:09:55.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:55.960 "listen_address": { 00:09:55.960 "trtype": "TCP", 00:09:55.960 "adrfam": "IPv4", 00:09:55.960 "traddr": "10.0.0.3", 00:09:55.960 "trsvcid": "4420" 00:09:55.960 }, 00:09:55.960 "peer_address": { 00:09:55.960 "trtype": "TCP", 00:09:55.960 "adrfam": "IPv4", 00:09:55.960 "traddr": "10.0.0.1", 00:09:55.960 "trsvcid": "56508" 00:09:55.960 }, 00:09:55.960 "auth": { 00:09:55.960 "state": "completed", 00:09:55.960 "digest": "sha384", 00:09:55.960 "dhgroup": "ffdhe3072" 00:09:55.960 } 00:09:55.960 } 00:09:55.960 ]' 00:09:55.960 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:55.960 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:55.960 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:55.960 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 
== \f\f\d\h\e\3\0\7\2 ]] 00:09:55.960 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:55.960 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:55.960 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:55.960 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:56.219 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:09:56.219 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:09:56.788 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:56.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:56.788 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:56.788 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.788 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.788 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.788 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:56.788 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:09:56.788 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:09:57.047 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:09:57.047 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:57.047 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:57.047 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:57.047 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:57.047 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:57.047 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key3 00:09:57.047 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.047 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.047 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.047 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:57.047 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:57.047 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:57.305 00:09:57.306 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:57.306 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:57.306 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:57.564 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:57.564 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:57.564 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.564 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.564 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.564 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:57.564 { 00:09:57.564 "cntlid": 71, 00:09:57.564 "qid": 0, 00:09:57.564 "state": "enabled", 00:09:57.564 "thread": "nvmf_tgt_poll_group_000", 00:09:57.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:57.564 "listen_address": { 00:09:57.564 "trtype": "TCP", 00:09:57.565 "adrfam": "IPv4", 00:09:57.565 "traddr": "10.0.0.3", 00:09:57.565 "trsvcid": "4420" 00:09:57.565 }, 00:09:57.565 "peer_address": { 00:09:57.565 "trtype": "TCP", 00:09:57.565 "adrfam": "IPv4", 00:09:57.565 "traddr": "10.0.0.1", 00:09:57.565 "trsvcid": "56532" 00:09:57.565 }, 00:09:57.565 "auth": { 00:09:57.565 "state": "completed", 00:09:57.565 "digest": "sha384", 00:09:57.565 "dhgroup": "ffdhe3072" 00:09:57.565 } 00:09:57.565 } 00:09:57.565 ]' 00:09:57.565 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:57.565 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:57.565 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:57.822 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:57.822 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:57.822 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:57.822 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:57.822 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:57.822 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:09:57.822 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:09:58.756 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:58.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:58.756 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:09:58.756 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.756 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.756 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.756 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:58.756 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:58.756 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:09:58.756 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:09:58.756 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:09:58.756 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:58.756 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:58.756 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:58.756 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:58.756 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:58.756 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:58.756 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.756 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.756 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.756 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:58.756 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:58.756 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:59.013 00:09:59.013 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:59.013 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:59.013 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:59.272 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:59.272 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:59.272 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.272 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.272 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.272 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:59.272 { 00:09:59.272 "cntlid": 73, 00:09:59.272 "qid": 0, 00:09:59.272 "state": "enabled", 00:09:59.272 "thread": "nvmf_tgt_poll_group_000", 00:09:59.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:09:59.272 "listen_address": { 00:09:59.272 "trtype": "TCP", 00:09:59.272 "adrfam": "IPv4", 00:09:59.272 "traddr": "10.0.0.3", 00:09:59.272 "trsvcid": "4420" 00:09:59.272 }, 00:09:59.272 "peer_address": { 00:09:59.272 "trtype": "TCP", 00:09:59.272 "adrfam": "IPv4", 00:09:59.272 "traddr": "10.0.0.1", 00:09:59.272 "trsvcid": "56574" 00:09:59.272 }, 00:09:59.272 "auth": { 00:09:59.272 "state": "completed", 00:09:59.272 "digest": "sha384", 00:09:59.272 "dhgroup": "ffdhe4096" 00:09:59.272 } 00:09:59.272 } 00:09:59.272 ]' 00:09:59.272 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:59.272 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:59.272 20:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:59.529 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:59.529 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:59.529 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:59.529 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:59.529 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:59.786 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:09:59.786 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:10:00.353 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:00.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:00.353 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:00.353 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.353 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.353 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.353 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:00.353 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:00.353 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:00.353 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:10:00.353 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:00.353 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:00.353 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:00.353 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:00.353 20:33:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:00.353 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:00.353 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.354 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.354 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.354 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:00.354 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:00.354 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:00.612 00:10:00.870 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:00.870 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:00.870 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:00.870 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:00.870 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:00.870 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.870 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.870 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.870 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:00.870 { 00:10:00.870 "cntlid": 75, 00:10:00.870 "qid": 0, 00:10:00.870 "state": "enabled", 00:10:00.870 "thread": "nvmf_tgt_poll_group_000", 00:10:00.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:00.870 "listen_address": { 00:10:00.870 "trtype": "TCP", 00:10:00.870 "adrfam": "IPv4", 00:10:00.870 "traddr": "10.0.0.3", 00:10:00.870 "trsvcid": "4420" 00:10:00.870 }, 00:10:00.870 "peer_address": { 00:10:00.870 "trtype": "TCP", 00:10:00.870 "adrfam": "IPv4", 00:10:00.870 "traddr": "10.0.0.1", 00:10:00.870 "trsvcid": "49644" 00:10:00.870 }, 00:10:00.870 "auth": { 00:10:00.870 "state": "completed", 00:10:00.870 "digest": "sha384", 00:10:00.870 "dhgroup": "ffdhe4096" 00:10:00.870 } 00:10:00.870 } 00:10:00.870 ]' 00:10:00.870 20:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:00.870 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:01.128 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:01.128 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:01.128 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:01.128 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:01.128 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:01.128 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:01.386 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:10:01.386 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:10:01.952 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:01.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:01.952 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:01.952 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.952 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.952 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.952 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:01.952 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:01.952 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:02.211 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:10:02.211 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:02.211 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:02.211 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe4096 00:10:02.211 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:02.211 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:02.211 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:02.211 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.211 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.211 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.211 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:02.212 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:02.212 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:02.469 00:10:02.469 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:02.469 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:02.469 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:02.810 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:02.810 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:02.810 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.810 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.810 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.810 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:02.810 { 00:10:02.810 "cntlid": 77, 00:10:02.810 "qid": 0, 00:10:02.810 "state": "enabled", 00:10:02.810 "thread": "nvmf_tgt_poll_group_000", 00:10:02.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:02.810 "listen_address": { 00:10:02.810 "trtype": "TCP", 00:10:02.810 "adrfam": "IPv4", 00:10:02.810 "traddr": "10.0.0.3", 00:10:02.810 "trsvcid": "4420" 00:10:02.810 }, 00:10:02.810 "peer_address": { 00:10:02.810 "trtype": "TCP", 00:10:02.810 "adrfam": "IPv4", 00:10:02.810 "traddr": "10.0.0.1", 00:10:02.810 "trsvcid": "49678" 00:10:02.810 }, 00:10:02.810 "auth": { 00:10:02.810 "state": "completed", 00:10:02.810 "digest": "sha384", 
00:10:02.810 "dhgroup": "ffdhe4096" 00:10:02.810 } 00:10:02.810 } 00:10:02.810 ]' 00:10:02.810 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:02.810 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:02.810 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:02.810 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:02.810 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:02.810 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:02.810 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:02.810 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:03.068 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:10:03.068 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:10:03.638 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:03.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:03.638 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:03.638 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.638 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.638 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.638 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:03.638 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:03.638 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:03.638 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:10:03.638 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:03.638 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:10:03.638 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:03.638 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:03.638 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:03.638 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key3 00:10:03.638 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.638 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.638 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.638 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:03.638 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:03.638 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:04.209 00:10:04.209 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:04.209 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:04.209 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:04.209 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:04.209 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:04.209 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.209 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.209 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.209 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:04.209 { 00:10:04.209 "cntlid": 79, 00:10:04.209 "qid": 0, 00:10:04.209 "state": "enabled", 00:10:04.209 "thread": "nvmf_tgt_poll_group_000", 00:10:04.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:04.209 "listen_address": { 00:10:04.209 "trtype": "TCP", 00:10:04.209 "adrfam": "IPv4", 00:10:04.209 "traddr": "10.0.0.3", 00:10:04.209 "trsvcid": "4420" 00:10:04.209 }, 00:10:04.209 "peer_address": { 00:10:04.209 "trtype": "TCP", 00:10:04.209 "adrfam": "IPv4", 00:10:04.209 "traddr": "10.0.0.1", 00:10:04.209 "trsvcid": "49698" 00:10:04.209 }, 00:10:04.209 "auth": { 00:10:04.209 "state": "completed", 00:10:04.209 "digest": 
"sha384", 00:10:04.209 "dhgroup": "ffdhe4096" 00:10:04.209 } 00:10:04.209 } 00:10:04.209 ]' 00:10:04.209 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:04.209 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:04.209 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:04.470 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:04.470 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:04.470 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:04.470 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:04.470 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:04.470 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:10:04.470 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:10:05.042 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:05.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:05.042 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:05.042 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.042 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.042 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.042 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:05.042 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:05.042 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:05.042 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:05.303 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:10:05.303 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:05.303 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:10:05.303 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:05.303 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:05.303 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:05.303 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:05.303 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.303 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.303 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.303 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:05.303 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:05.303 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:05.914 00:10:05.914 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:05.914 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:05.914 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:05.914 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:05.914 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:05.914 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.914 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.914 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.914 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:05.914 { 00:10:05.914 "cntlid": 81, 00:10:05.914 "qid": 0, 00:10:05.914 "state": "enabled", 00:10:05.914 "thread": "nvmf_tgt_poll_group_000", 00:10:05.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:05.914 "listen_address": { 00:10:05.914 "trtype": "TCP", 00:10:05.914 "adrfam": "IPv4", 00:10:05.914 "traddr": "10.0.0.3", 00:10:05.914 "trsvcid": "4420" 00:10:05.914 }, 00:10:05.914 "peer_address": { 00:10:05.914 "trtype": "TCP", 00:10:05.914 "adrfam": "IPv4", 00:10:05.914 "traddr": "10.0.0.1", 
00:10:05.914 "trsvcid": "49728" 00:10:05.914 }, 00:10:05.914 "auth": { 00:10:05.914 "state": "completed", 00:10:05.914 "digest": "sha384", 00:10:05.914 "dhgroup": "ffdhe6144" 00:10:05.914 } 00:10:05.914 } 00:10:05.914 ]' 00:10:05.914 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:05.914 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:05.914 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:06.175 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:06.175 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:06.175 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:06.175 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:06.176 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:06.176 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:10:06.176 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:10:06.741 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:06.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:06.741 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:06.741 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.741 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.741 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.741 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:06.741 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:06.741 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:06.999 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 
1 00:10:06.999 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:06.999 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:06.999 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:06.999 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:06.999 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:06.999 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:06.999 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.999 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.999 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.999 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:06.999 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:06.999 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:07.565 00:10:07.565 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:07.565 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:07.565 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:07.565 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:07.565 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:07.565 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.565 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.565 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.565 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:07.565 { 00:10:07.565 "cntlid": 83, 00:10:07.565 "qid": 0, 00:10:07.565 "state": "enabled", 00:10:07.565 "thread": "nvmf_tgt_poll_group_000", 00:10:07.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:07.565 "listen_address": { 00:10:07.565 "trtype": "TCP", 00:10:07.565 "adrfam": "IPv4", 00:10:07.565 
"traddr": "10.0.0.3", 00:10:07.565 "trsvcid": "4420" 00:10:07.565 }, 00:10:07.565 "peer_address": { 00:10:07.565 "trtype": "TCP", 00:10:07.565 "adrfam": "IPv4", 00:10:07.565 "traddr": "10.0.0.1", 00:10:07.565 "trsvcid": "49754" 00:10:07.565 }, 00:10:07.565 "auth": { 00:10:07.565 "state": "completed", 00:10:07.565 "digest": "sha384", 00:10:07.565 "dhgroup": "ffdhe6144" 00:10:07.565 } 00:10:07.565 } 00:10:07.565 ]' 00:10:07.565 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:07.823 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:07.823 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:07.823 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:07.823 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:07.823 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:07.823 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:07.823 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:08.080 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:10:08.080 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:10:08.646 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:08.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:08.646 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:08.646 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.646 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.646 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.646 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:08.646 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:08.646 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:08.906 20:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:10:08.906 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:08.906 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:08.906 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:08.906 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:08.906 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:08.906 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:08.906 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.906 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.906 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.906 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:08.906 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:08.906 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:09.166 00:10:09.166 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:09.166 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:09.166 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:09.424 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:09.424 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:09.424 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.424 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.424 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.424 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:09.424 { 00:10:09.424 "cntlid": 85, 00:10:09.424 "qid": 0, 00:10:09.424 "state": "enabled", 00:10:09.424 "thread": "nvmf_tgt_poll_group_000", 00:10:09.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 
00:10:09.424 "listen_address": { 00:10:09.424 "trtype": "TCP", 00:10:09.424 "adrfam": "IPv4", 00:10:09.424 "traddr": "10.0.0.3", 00:10:09.424 "trsvcid": "4420" 00:10:09.424 }, 00:10:09.424 "peer_address": { 00:10:09.424 "trtype": "TCP", 00:10:09.424 "adrfam": "IPv4", 00:10:09.424 "traddr": "10.0.0.1", 00:10:09.424 "trsvcid": "49772" 00:10:09.424 }, 00:10:09.424 "auth": { 00:10:09.424 "state": "completed", 00:10:09.424 "digest": "sha384", 00:10:09.424 "dhgroup": "ffdhe6144" 00:10:09.424 } 00:10:09.424 } 00:10:09.424 ]' 00:10:09.424 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:09.424 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:09.424 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:09.424 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:09.424 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:09.682 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:09.682 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:09.682 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:09.682 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:10:09.682 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:10:10.247 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:10.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:10.247 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:10.247 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.247 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.247 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.247 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:10.247 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:10.247 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:10.504 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:10:10.504 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:10.504 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:10.504 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:10.505 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:10.505 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:10.505 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key3 00:10:10.505 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.505 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.505 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.505 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:10.505 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:10.505 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:11.070 00:10:11.070 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:11.070 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:11.070 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:11.070 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:11.070 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:11.070 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.070 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.070 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.070 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:11.070 { 00:10:11.070 "cntlid": 87, 00:10:11.070 "qid": 0, 00:10:11.070 "state": "enabled", 00:10:11.070 "thread": "nvmf_tgt_poll_group_000", 00:10:11.070 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:11.070 "listen_address": { 00:10:11.070 "trtype": "TCP", 00:10:11.070 "adrfam": "IPv4", 00:10:11.070 "traddr": "10.0.0.3", 00:10:11.070 "trsvcid": "4420" 00:10:11.070 }, 00:10:11.070 "peer_address": { 00:10:11.070 "trtype": "TCP", 00:10:11.070 "adrfam": "IPv4", 00:10:11.070 "traddr": "10.0.0.1", 00:10:11.070 "trsvcid": "55052" 00:10:11.070 }, 00:10:11.070 "auth": { 00:10:11.070 "state": "completed", 00:10:11.070 "digest": "sha384", 00:10:11.070 "dhgroup": "ffdhe6144" 00:10:11.070 } 00:10:11.070 } 00:10:11.070 ]' 00:10:11.070 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:11.070 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:11.070 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:11.329 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:11.329 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:11.329 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:11.329 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:11.329 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:11.329 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:10:11.329 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:10:11.896 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:11.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:11.896 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:11.896 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.896 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.896 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.896 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:11.896 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:11.896 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:11.896 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:12.157 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:10:12.157 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:12.157 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:12.157 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:12.157 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:12.157 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:12.158 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:12.158 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.158 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.158 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.158 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:12.158 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:12.158 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:12.730 00:10:12.730 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:12.730 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:12.730 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:12.991 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:12.991 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:12.991 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.991 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.991 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.991 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:12.991 { 00:10:12.991 "cntlid": 89, 00:10:12.991 "qid": 
0, 00:10:12.991 "state": "enabled", 00:10:12.991 "thread": "nvmf_tgt_poll_group_000", 00:10:12.991 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:12.991 "listen_address": { 00:10:12.991 "trtype": "TCP", 00:10:12.991 "adrfam": "IPv4", 00:10:12.991 "traddr": "10.0.0.3", 00:10:12.991 "trsvcid": "4420" 00:10:12.991 }, 00:10:12.991 "peer_address": { 00:10:12.991 "trtype": "TCP", 00:10:12.991 "adrfam": "IPv4", 00:10:12.991 "traddr": "10.0.0.1", 00:10:12.991 "trsvcid": "55080" 00:10:12.991 }, 00:10:12.991 "auth": { 00:10:12.991 "state": "completed", 00:10:12.991 "digest": "sha384", 00:10:12.991 "dhgroup": "ffdhe8192" 00:10:12.991 } 00:10:12.991 } 00:10:12.991 ]' 00:10:12.991 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:12.991 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:12.991 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:12.991 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:12.991 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:12.991 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:12.991 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:12.991 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:13.252 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:10:13.253 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:10:13.874 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:13.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:13.874 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:13.874 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.874 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.874 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.874 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:13.874 20:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:13.874 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:14.135 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:10:14.135 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:14.135 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:14.135 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:14.135 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:14.135 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:14.135 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:14.135 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.135 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.135 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.135 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:14.135 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:14.135 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:14.707 00:10:14.707 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:14.707 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:14.707 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:14.707 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:14.707 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:14.707 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.707 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.707 20:33:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.707 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:14.707 { 00:10:14.707 "cntlid": 91, 00:10:14.707 "qid": 0, 00:10:14.707 "state": "enabled", 00:10:14.707 "thread": "nvmf_tgt_poll_group_000", 00:10:14.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:14.707 "listen_address": { 00:10:14.707 "trtype": "TCP", 00:10:14.707 "adrfam": "IPv4", 00:10:14.707 "traddr": "10.0.0.3", 00:10:14.707 "trsvcid": "4420" 00:10:14.707 }, 00:10:14.707 "peer_address": { 00:10:14.707 "trtype": "TCP", 00:10:14.707 "adrfam": "IPv4", 00:10:14.707 "traddr": "10.0.0.1", 00:10:14.707 "trsvcid": "55106" 00:10:14.707 }, 00:10:14.707 "auth": { 00:10:14.707 "state": "completed", 00:10:14.707 "digest": "sha384", 00:10:14.707 "dhgroup": "ffdhe8192" 00:10:14.707 } 00:10:14.707 } 00:10:14.707 ]' 00:10:14.707 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:14.968 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:14.968 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:14.968 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:14.968 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:14.968 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:14.968 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:14.968 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:15.229 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:10:15.229 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:10:15.800 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:15.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:15.800 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:15.800 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.800 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.800 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
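The pass is only counted as good after the negotiated parameters are read back: the target's nvmf_subsystem_get_qpairs output (the JSON block above) is filtered with jq, and the first qpair's auth fields must match the configured digest and DH group with state "completed". A condensed version of that check, assuming qpairs holds the JSON returned by rpc_cmd:

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  # drop the host-side controller again before the next key is exercised
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0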
00:10:15.800 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:15.800 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:15.800 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:16.061 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:10:16.061 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:16.061 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:16.061 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:16.061 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:16.061 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:16.061 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:16.061 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.061 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.061 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.061 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:16.061 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:16.061 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:16.632 00:10:16.632 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:16.632 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:16.632 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:16.632 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:16.632 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:16.632 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.632 20:33:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.632 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.632 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:16.632 { 00:10:16.632 "cntlid": 93, 00:10:16.632 "qid": 0, 00:10:16.632 "state": "enabled", 00:10:16.632 "thread": "nvmf_tgt_poll_group_000", 00:10:16.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:16.632 "listen_address": { 00:10:16.632 "trtype": "TCP", 00:10:16.632 "adrfam": "IPv4", 00:10:16.632 "traddr": "10.0.0.3", 00:10:16.632 "trsvcid": "4420" 00:10:16.632 }, 00:10:16.632 "peer_address": { 00:10:16.632 "trtype": "TCP", 00:10:16.632 "adrfam": "IPv4", 00:10:16.632 "traddr": "10.0.0.1", 00:10:16.632 "trsvcid": "55142" 00:10:16.632 }, 00:10:16.632 "auth": { 00:10:16.632 "state": "completed", 00:10:16.632 "digest": "sha384", 00:10:16.632 "dhgroup": "ffdhe8192" 00:10:16.633 } 00:10:16.633 } 00:10:16.633 ]' 00:10:16.633 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:16.633 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:16.633 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:16.633 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:16.633 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:16.891 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:16.891 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:16.891 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:16.891 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:10:16.891 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:10:17.489 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:17.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:17.489 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:17.489 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.489 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
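Each key is also pushed through the kernel initiator: nvme connect is given the literal DHHC-1 secrets on the command line (the base64 blobs in the log) rather than SPDK key names, the controller is disconnected again, and nvmf_subsystem_remove_host deauthorizes the host so the next iteration can re-add it with a different key. Roughly, with hostnqn and hostid standing in for the uuid-based values shown above and the secret bodies elided:

  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret "DHHC-1:01:<host secret>" \
      --dhchap-ctrl-secret "DHHC-1:02:<controller secret>"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # deauthorize before the next loop iteration registers a different key
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"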
00:10:17.489 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.489 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:17.489 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:17.489 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:17.750 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:10:17.750 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:17.750 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:17.750 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:17.750 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:17.750 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:17.750 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key3 00:10:17.750 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.750 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.750 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.750 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:17.750 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:17.750 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:18.320 00:10:18.320 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:18.320 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:18.320 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:18.580 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:18.580 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:18.580 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:18.580 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.580 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.580 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:18.580 { 00:10:18.580 "cntlid": 95, 00:10:18.580 "qid": 0, 00:10:18.580 "state": "enabled", 00:10:18.580 "thread": "nvmf_tgt_poll_group_000", 00:10:18.581 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:18.581 "listen_address": { 00:10:18.581 "trtype": "TCP", 00:10:18.581 "adrfam": "IPv4", 00:10:18.581 "traddr": "10.0.0.3", 00:10:18.581 "trsvcid": "4420" 00:10:18.581 }, 00:10:18.581 "peer_address": { 00:10:18.581 "trtype": "TCP", 00:10:18.581 "adrfam": "IPv4", 00:10:18.581 "traddr": "10.0.0.1", 00:10:18.581 "trsvcid": "55156" 00:10:18.581 }, 00:10:18.581 "auth": { 00:10:18.581 "state": "completed", 00:10:18.581 "digest": "sha384", 00:10:18.581 "dhgroup": "ffdhe8192" 00:10:18.581 } 00:10:18.581 } 00:10:18.581 ]' 00:10:18.581 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:18.581 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:18.581 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:18.581 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:18.581 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:18.581 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:18.581 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:18.581 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:18.840 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:10:18.840 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:10:19.411 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:19.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:19.411 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:19.411 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.411 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.411 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:19.411 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:19.411 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:19.411 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:19.411 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:19.411 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:19.672 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:10:19.672 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:19.672 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:19.672 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:19.672 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:19.672 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:19.672 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:19.672 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.672 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.672 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.672 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:19.672 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:19.672 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:19.959 00:10:19.959 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:19.959 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:19.959 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:20.221 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:20.221 20:33:34 
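At this point the log has switched from sha384/ffdhe8192 to sha512 with a null DH group: the three nested for-loops visible in the trace (for digest, for dhgroup, for keyid) walk a matrix of digest x DH group x key and re-run the whole authenticate/verify/teardown cycle for every combination. The exact contents of the digests, dhgroups, and keys arrays are defined earlier in the script and not visible here; sha384, sha512, null, ffdhe2048, and ffdhe8192 are the values that appear in this stretch of the log. Schematically:

  for digest in "${digests[@]}"; do            # e.g. sha384, sha512
      for dhgroup in "${dhgroups[@]}"; do      # e.g. null, ffdhe2048, ffdhe8192
          for keyid in "${!keys[@]}"; do       # key0..key3 in this run
              hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done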
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:20.221 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.221 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.221 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.221 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:20.221 { 00:10:20.221 "cntlid": 97, 00:10:20.221 "qid": 0, 00:10:20.221 "state": "enabled", 00:10:20.221 "thread": "nvmf_tgt_poll_group_000", 00:10:20.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:20.221 "listen_address": { 00:10:20.221 "trtype": "TCP", 00:10:20.221 "adrfam": "IPv4", 00:10:20.221 "traddr": "10.0.0.3", 00:10:20.221 "trsvcid": "4420" 00:10:20.221 }, 00:10:20.221 "peer_address": { 00:10:20.221 "trtype": "TCP", 00:10:20.221 "adrfam": "IPv4", 00:10:20.221 "traddr": "10.0.0.1", 00:10:20.221 "trsvcid": "56216" 00:10:20.221 }, 00:10:20.221 "auth": { 00:10:20.221 "state": "completed", 00:10:20.221 "digest": "sha512", 00:10:20.221 "dhgroup": "null" 00:10:20.221 } 00:10:20.221 } 00:10:20.221 ]' 00:10:20.221 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:20.221 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:20.221 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:20.221 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:20.221 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:20.221 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:20.221 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:20.221 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:20.483 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:10:20.483 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:10:21.056 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:21.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:21.056 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:21.056 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.056 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.056 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.056 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:21.056 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:21.056 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:21.316 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:10:21.316 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:21.316 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:21.316 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:21.316 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:21.316 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:21.316 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:21.316 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.316 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.316 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.316 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:21.316 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:21.316 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:21.576 00:10:21.576 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:21.576 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:21.576 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:10:21.837 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:21.837 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:21.837 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.837 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.837 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.837 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:21.837 { 00:10:21.837 "cntlid": 99, 00:10:21.837 "qid": 0, 00:10:21.837 "state": "enabled", 00:10:21.837 "thread": "nvmf_tgt_poll_group_000", 00:10:21.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:21.837 "listen_address": { 00:10:21.837 "trtype": "TCP", 00:10:21.837 "adrfam": "IPv4", 00:10:21.837 "traddr": "10.0.0.3", 00:10:21.837 "trsvcid": "4420" 00:10:21.837 }, 00:10:21.837 "peer_address": { 00:10:21.837 "trtype": "TCP", 00:10:21.837 "adrfam": "IPv4", 00:10:21.837 "traddr": "10.0.0.1", 00:10:21.837 "trsvcid": "56258" 00:10:21.837 }, 00:10:21.837 "auth": { 00:10:21.837 "state": "completed", 00:10:21.837 "digest": "sha512", 00:10:21.837 "dhgroup": "null" 00:10:21.837 } 00:10:21.837 } 00:10:21.837 ]' 00:10:21.837 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:21.837 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:21.837 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:21.837 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:21.837 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:21.837 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:21.837 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:21.837 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:22.097 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:10:22.097 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:10:22.666 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:22.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:22.666 20:33:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:22.666 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.666 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.666 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.666 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:22.666 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:22.666 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:22.927 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:10:22.927 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:22.927 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:22.927 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:22.927 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:22.927 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:22.927 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:22.927 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.927 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.927 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.927 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:22.927 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:22.927 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:23.187 00:10:23.187 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:23.187 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:23.187 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:23.187 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:23.447 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:23.447 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.447 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.447 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.447 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:23.447 { 00:10:23.447 "cntlid": 101, 00:10:23.447 "qid": 0, 00:10:23.447 "state": "enabled", 00:10:23.447 "thread": "nvmf_tgt_poll_group_000", 00:10:23.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:23.447 "listen_address": { 00:10:23.447 "trtype": "TCP", 00:10:23.447 "adrfam": "IPv4", 00:10:23.447 "traddr": "10.0.0.3", 00:10:23.447 "trsvcid": "4420" 00:10:23.447 }, 00:10:23.447 "peer_address": { 00:10:23.447 "trtype": "TCP", 00:10:23.447 "adrfam": "IPv4", 00:10:23.447 "traddr": "10.0.0.1", 00:10:23.447 "trsvcid": "56282" 00:10:23.447 }, 00:10:23.447 "auth": { 00:10:23.447 "state": "completed", 00:10:23.447 "digest": "sha512", 00:10:23.447 "dhgroup": "null" 00:10:23.447 } 00:10:23.447 } 00:10:23.447 ]' 00:10:23.447 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:23.447 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:23.447 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:23.447 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:23.447 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:23.447 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:23.447 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:23.447 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:23.708 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:10:23.708 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:10:24.278 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:24.278 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:24.278 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:24.278 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.278 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.278 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.278 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:24.278 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:24.278 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:24.539 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:10:24.539 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:24.539 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:24.539 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:24.539 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:24.539 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:24.539 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key3 00:10:24.539 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.539 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.540 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.540 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:24.540 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:24.540 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:24.800 00:10:24.800 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:24.800 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:24.800 
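Worth noting in the key3 iterations (this one and the earlier ffdhe8192 pass): unlike key0 through key2, key3 is registered and attached without a controller key, and the matching nvme connect carries only --dhchap-secret. The ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion in the script simply omits the flag when no controller key exists for the slot, so key3 exercises one-way (host-only) authentication while the other keys are bidirectional. Side by side, with subnqn and hostnqn as placeholders:

  # bidirectional: host authenticates with key2, controller answers with ckey2
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # one-way: no controller key configured for key3, so the flag is omitted
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3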
20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:24.800 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:24.800 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:24.800 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.800 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.800 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.800 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:24.800 { 00:10:24.800 "cntlid": 103, 00:10:24.800 "qid": 0, 00:10:24.800 "state": "enabled", 00:10:24.800 "thread": "nvmf_tgt_poll_group_000", 00:10:24.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:24.800 "listen_address": { 00:10:24.800 "trtype": "TCP", 00:10:24.800 "adrfam": "IPv4", 00:10:24.800 "traddr": "10.0.0.3", 00:10:24.800 "trsvcid": "4420" 00:10:24.800 }, 00:10:24.801 "peer_address": { 00:10:24.801 "trtype": "TCP", 00:10:24.801 "adrfam": "IPv4", 00:10:24.801 "traddr": "10.0.0.1", 00:10:24.801 "trsvcid": "56304" 00:10:24.801 }, 00:10:24.801 "auth": { 00:10:24.801 "state": "completed", 00:10:24.801 "digest": "sha512", 00:10:24.801 "dhgroup": "null" 00:10:24.801 } 00:10:24.801 } 00:10:24.801 ]' 00:10:24.801 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:25.060 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:25.060 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:25.060 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:25.060 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:25.060 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:25.060 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:25.060 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:25.357 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:10:25.357 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:10:25.942 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:25.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:25.942 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:25.942 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.942 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.942 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.942 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:25.942 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:25.942 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:25.942 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:25.942 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:10:25.942 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:25.942 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:25.942 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:25.942 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:25.942 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:25.942 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:25.942 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.942 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.942 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.942 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:25.942 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:25.942 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:26.202 00:10:26.202 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:26.202 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
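Before the qpair inspection, each pass first confirms that the authenticated attach actually produced the controller it asked for, by listing the host's NVMe bdev controllers and comparing the name against nvme0, which is what the bdev_nvme_get_controllers/jq pair above is doing. Condensed:

  # hostrpc wraps rpc.py -s /var/tmp/host.sock in this script
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]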
00:10:26.202 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:26.463 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:26.463 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:26.463 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.463 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.463 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.463 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:26.463 { 00:10:26.463 "cntlid": 105, 00:10:26.463 "qid": 0, 00:10:26.463 "state": "enabled", 00:10:26.463 "thread": "nvmf_tgt_poll_group_000", 00:10:26.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:26.463 "listen_address": { 00:10:26.463 "trtype": "TCP", 00:10:26.463 "adrfam": "IPv4", 00:10:26.463 "traddr": "10.0.0.3", 00:10:26.463 "trsvcid": "4420" 00:10:26.463 }, 00:10:26.463 "peer_address": { 00:10:26.463 "trtype": "TCP", 00:10:26.463 "adrfam": "IPv4", 00:10:26.463 "traddr": "10.0.0.1", 00:10:26.463 "trsvcid": "56342" 00:10:26.463 }, 00:10:26.463 "auth": { 00:10:26.463 "state": "completed", 00:10:26.463 "digest": "sha512", 00:10:26.463 "dhgroup": "ffdhe2048" 00:10:26.463 } 00:10:26.463 } 00:10:26.463 ]' 00:10:26.463 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:26.463 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:26.463 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:26.463 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:26.463 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:26.723 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:26.723 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:26.723 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:26.723 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:10:26.723 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret 
DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:10:27.295 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:27.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:27.295 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:27.295 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.295 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.295 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.295 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:27.295 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:27.295 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:27.557 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:10:27.557 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:27.557 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:27.557 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:27.557 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:27.557 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:27.557 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:27.557 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.557 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.557 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.557 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:27.557 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:27.557 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:27.818 00:10:27.818 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:27.818 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:27.818 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:28.082 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:28.082 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:28.082 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.082 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.082 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.082 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:28.082 { 00:10:28.082 "cntlid": 107, 00:10:28.082 "qid": 0, 00:10:28.082 "state": "enabled", 00:10:28.082 "thread": "nvmf_tgt_poll_group_000", 00:10:28.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:28.082 "listen_address": { 00:10:28.082 "trtype": "TCP", 00:10:28.082 "adrfam": "IPv4", 00:10:28.082 "traddr": "10.0.0.3", 00:10:28.082 "trsvcid": "4420" 00:10:28.082 }, 00:10:28.082 "peer_address": { 00:10:28.082 "trtype": "TCP", 00:10:28.082 "adrfam": "IPv4", 00:10:28.082 "traddr": "10.0.0.1", 00:10:28.082 "trsvcid": "56358" 00:10:28.082 }, 00:10:28.082 "auth": { 00:10:28.082 "state": "completed", 00:10:28.082 "digest": "sha512", 00:10:28.082 "dhgroup": "ffdhe2048" 00:10:28.082 } 00:10:28.082 } 00:10:28.082 ]' 00:10:28.082 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:28.082 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:28.082 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:28.082 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:28.082 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:28.344 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:28.344 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:28.344 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:28.344 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:10:28.345 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 
38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:10:28.917 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:28.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:28.917 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:28.917 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.917 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.917 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.917 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:28.917 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:28.917 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:29.178 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:10:29.178 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:29.178 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:29.178 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:29.178 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:29.178 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:29.178 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:29.178 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.178 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.178 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.178 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:29.178 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:29.178 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:29.439 00:10:29.439 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:29.439 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:29.439 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:29.701 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:29.701 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:29.701 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.701 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.701 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.701 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:29.701 { 00:10:29.701 "cntlid": 109, 00:10:29.701 "qid": 0, 00:10:29.701 "state": "enabled", 00:10:29.701 "thread": "nvmf_tgt_poll_group_000", 00:10:29.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:29.701 "listen_address": { 00:10:29.701 "trtype": "TCP", 00:10:29.701 "adrfam": "IPv4", 00:10:29.701 "traddr": "10.0.0.3", 00:10:29.701 "trsvcid": "4420" 00:10:29.701 }, 00:10:29.701 "peer_address": { 00:10:29.701 "trtype": "TCP", 00:10:29.701 "adrfam": "IPv4", 00:10:29.701 "traddr": "10.0.0.1", 00:10:29.701 "trsvcid": "36598" 00:10:29.701 }, 00:10:29.701 "auth": { 00:10:29.701 "state": "completed", 00:10:29.701 "digest": "sha512", 00:10:29.701 "dhgroup": "ffdhe2048" 00:10:29.701 } 00:10:29.701 } 00:10:29.701 ]' 00:10:29.701 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:29.701 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:29.701 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:29.701 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:29.701 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:29.701 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:29.701 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:29.701 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:29.962 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:10:29.962 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:10:30.529 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:30.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:30.529 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:30.529 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.529 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.529 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.529 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:30.529 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:30.529 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:30.788 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:10:30.788 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:30.788 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:30.788 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:30.788 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:30.788 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:30.788 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key3 00:10:30.788 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.788 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.788 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.788 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:30.788 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:30.788 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:31.046 00:10:31.046 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:31.046 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:31.046 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:31.304 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:31.304 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:31.304 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.304 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.304 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.304 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:31.304 { 00:10:31.304 "cntlid": 111, 00:10:31.304 "qid": 0, 00:10:31.304 "state": "enabled", 00:10:31.304 "thread": "nvmf_tgt_poll_group_000", 00:10:31.304 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:31.304 "listen_address": { 00:10:31.304 "trtype": "TCP", 00:10:31.304 "adrfam": "IPv4", 00:10:31.304 "traddr": "10.0.0.3", 00:10:31.304 "trsvcid": "4420" 00:10:31.304 }, 00:10:31.304 "peer_address": { 00:10:31.304 "trtype": "TCP", 00:10:31.304 "adrfam": "IPv4", 00:10:31.304 "traddr": "10.0.0.1", 00:10:31.304 "trsvcid": "36636" 00:10:31.304 }, 00:10:31.304 "auth": { 00:10:31.304 "state": "completed", 00:10:31.304 "digest": "sha512", 00:10:31.304 "dhgroup": "ffdhe2048" 00:10:31.304 } 00:10:31.304 } 00:10:31.304 ]' 00:10:31.304 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:31.304 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:31.304 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:31.304 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:31.304 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:31.304 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:31.304 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:31.304 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:31.562 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:10:31.562 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:10:32.127 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:32.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:32.127 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:32.127 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.127 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.127 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.127 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:32.127 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:32.127 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:32.127 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:32.385 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:10:32.385 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:32.385 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:32.385 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:32.385 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:32.385 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:32.385 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:32.385 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.385 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.385 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.385 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:32.385 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:32.385 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:32.643 00:10:32.643 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:32.643 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:32.643 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:32.900 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:32.900 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:32.900 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.900 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.900 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.900 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:32.900 { 00:10:32.901 "cntlid": 113, 00:10:32.901 "qid": 0, 00:10:32.901 "state": "enabled", 00:10:32.901 "thread": "nvmf_tgt_poll_group_000", 00:10:32.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:32.901 "listen_address": { 00:10:32.901 "trtype": "TCP", 00:10:32.901 "adrfam": "IPv4", 00:10:32.901 "traddr": "10.0.0.3", 00:10:32.901 "trsvcid": "4420" 00:10:32.901 }, 00:10:32.901 "peer_address": { 00:10:32.901 "trtype": "TCP", 00:10:32.901 "adrfam": "IPv4", 00:10:32.901 "traddr": "10.0.0.1", 00:10:32.901 "trsvcid": "36652" 00:10:32.901 }, 00:10:32.901 "auth": { 00:10:32.901 "state": "completed", 00:10:32.901 "digest": "sha512", 00:10:32.901 "dhgroup": "ffdhe3072" 00:10:32.901 } 00:10:32.901 } 00:10:32.901 ]' 00:10:32.901 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:32.901 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:32.901 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:32.901 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:32.901 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:32.901 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:32.901 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:32.901 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:33.159 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret 
DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:10:33.159 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:10:33.741 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:33.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:33.741 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:33.741 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.741 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.741 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.741 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:33.741 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:33.741 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:33.999 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:10:33.999 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:33.999 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:33.999 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:33.999 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:33.999 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.999 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.999 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.999 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.999 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.999 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.999 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.999 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:34.256 00:10:34.256 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:34.256 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:34.256 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.513 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:34.513 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:34.513 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.513 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.513 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.513 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:34.513 { 00:10:34.513 "cntlid": 115, 00:10:34.514 "qid": 0, 00:10:34.514 "state": "enabled", 00:10:34.514 "thread": "nvmf_tgt_poll_group_000", 00:10:34.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:34.514 "listen_address": { 00:10:34.514 "trtype": "TCP", 00:10:34.514 "adrfam": "IPv4", 00:10:34.514 "traddr": "10.0.0.3", 00:10:34.514 "trsvcid": "4420" 00:10:34.514 }, 00:10:34.514 "peer_address": { 00:10:34.514 "trtype": "TCP", 00:10:34.514 "adrfam": "IPv4", 00:10:34.514 "traddr": "10.0.0.1", 00:10:34.514 "trsvcid": "36670" 00:10:34.514 }, 00:10:34.514 "auth": { 00:10:34.514 "state": "completed", 00:10:34.514 "digest": "sha512", 00:10:34.514 "dhgroup": "ffdhe3072" 00:10:34.514 } 00:10:34.514 } 00:10:34.514 ]' 00:10:34.514 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:34.514 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:34.514 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:34.514 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:34.514 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:34.514 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:34.514 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:34.514 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:34.774 20:33:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:10:34.774 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:10:35.339 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:35.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:35.339 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:35.339 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.339 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.339 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.339 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:35.339 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:35.339 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:35.597 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:10:35.597 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:35.597 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:35.597 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:35.597 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:35.597 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:35.597 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.597 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.597 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.597 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.597 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.597 20:33:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.597 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.854 00:10:35.854 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:35.854 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:35.854 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:36.112 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:36.112 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:36.112 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.112 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.112 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.112 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:36.112 { 00:10:36.112 "cntlid": 117, 00:10:36.112 "qid": 0, 00:10:36.112 "state": "enabled", 00:10:36.112 "thread": "nvmf_tgt_poll_group_000", 00:10:36.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:36.112 "listen_address": { 00:10:36.112 "trtype": "TCP", 00:10:36.112 "adrfam": "IPv4", 00:10:36.112 "traddr": "10.0.0.3", 00:10:36.112 "trsvcid": "4420" 00:10:36.112 }, 00:10:36.112 "peer_address": { 00:10:36.112 "trtype": "TCP", 00:10:36.112 "adrfam": "IPv4", 00:10:36.112 "traddr": "10.0.0.1", 00:10:36.112 "trsvcid": "36698" 00:10:36.112 }, 00:10:36.112 "auth": { 00:10:36.112 "state": "completed", 00:10:36.112 "digest": "sha512", 00:10:36.112 "dhgroup": "ffdhe3072" 00:10:36.112 } 00:10:36.112 } 00:10:36.112 ]' 00:10:36.112 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:36.112 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:36.112 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:36.112 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:36.112 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:36.112 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:36.112 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:36.112 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:36.369 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:10:36.369 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:10:36.934 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:36.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:36.934 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:36.934 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.934 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.934 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.934 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:36.934 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:36.934 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:37.203 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:10:37.203 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:37.203 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:37.204 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:37.204 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:37.204 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:37.204 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key3 00:10:37.204 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.204 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.204 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.204 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:10:37.204 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:37.204 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:37.474 00:10:37.474 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:37.474 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:37.474 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:37.733 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:37.733 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:37.733 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.733 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.733 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.733 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:37.733 { 00:10:37.733 "cntlid": 119, 00:10:37.733 "qid": 0, 00:10:37.733 "state": "enabled", 00:10:37.733 "thread": "nvmf_tgt_poll_group_000", 00:10:37.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:37.733 "listen_address": { 00:10:37.733 "trtype": "TCP", 00:10:37.733 "adrfam": "IPv4", 00:10:37.733 "traddr": "10.0.0.3", 00:10:37.733 "trsvcid": "4420" 00:10:37.733 }, 00:10:37.733 "peer_address": { 00:10:37.733 "trtype": "TCP", 00:10:37.733 "adrfam": "IPv4", 00:10:37.733 "traddr": "10.0.0.1", 00:10:37.733 "trsvcid": "36720" 00:10:37.733 }, 00:10:37.733 "auth": { 00:10:37.733 "state": "completed", 00:10:37.733 "digest": "sha512", 00:10:37.733 "dhgroup": "ffdhe3072" 00:10:37.733 } 00:10:37.733 } 00:10:37.733 ]' 00:10:37.733 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:37.733 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:37.733 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:37.733 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:37.733 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:37.733 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:37.733 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:37.733 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:37.992 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:10:37.992 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:10:38.558 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.558 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:38.558 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.558 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.558 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.558 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:38.558 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:38.558 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:10:38.558 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:10:38.816 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:10:38.816 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:38.816 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:38.816 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:38.816 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:38.816 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.816 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:38.816 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.816 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.816 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.816 20:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:38.816 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:38.816 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.075 00:10:39.075 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:39.075 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:39.075 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:39.334 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:39.334 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:39.334 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.334 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.334 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.334 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:39.334 { 00:10:39.334 "cntlid": 121, 00:10:39.334 "qid": 0, 00:10:39.334 "state": "enabled", 00:10:39.334 "thread": "nvmf_tgt_poll_group_000", 00:10:39.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:39.334 "listen_address": { 00:10:39.334 "trtype": "TCP", 00:10:39.334 "adrfam": "IPv4", 00:10:39.334 "traddr": "10.0.0.3", 00:10:39.334 "trsvcid": "4420" 00:10:39.334 }, 00:10:39.334 "peer_address": { 00:10:39.334 "trtype": "TCP", 00:10:39.334 "adrfam": "IPv4", 00:10:39.334 "traddr": "10.0.0.1", 00:10:39.334 "trsvcid": "36750" 00:10:39.334 }, 00:10:39.334 "auth": { 00:10:39.334 "state": "completed", 00:10:39.334 "digest": "sha512", 00:10:39.334 "dhgroup": "ffdhe4096" 00:10:39.334 } 00:10:39.334 } 00:10:39.334 ]' 00:10:39.334 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:39.334 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:39.334 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:39.592 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:39.592 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:39.592 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:39.592 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:39.592 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:39.850 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:10:39.850 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:10:40.416 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:40.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:40.416 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:40.416 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.416 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.416 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.416 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:40.416 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:10:40.416 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:10:40.416 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:10:40.416 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:40.416 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:40.416 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:40.416 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:40.416 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:40.416 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:40.416 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.416 20:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.416 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.416 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:40.416 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:40.416 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:40.998 00:10:40.998 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:40.998 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:40.998 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:40.998 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:40.998 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:40.998 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.998 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.998 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.998 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:40.998 { 00:10:40.998 "cntlid": 123, 00:10:40.998 "qid": 0, 00:10:40.998 "state": "enabled", 00:10:40.998 "thread": "nvmf_tgt_poll_group_000", 00:10:40.998 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:40.998 "listen_address": { 00:10:40.998 "trtype": "TCP", 00:10:40.998 "adrfam": "IPv4", 00:10:40.998 "traddr": "10.0.0.3", 00:10:40.998 "trsvcid": "4420" 00:10:40.998 }, 00:10:40.998 "peer_address": { 00:10:40.998 "trtype": "TCP", 00:10:40.998 "adrfam": "IPv4", 00:10:40.998 "traddr": "10.0.0.1", 00:10:40.998 "trsvcid": "36832" 00:10:40.998 }, 00:10:40.998 "auth": { 00:10:40.998 "state": "completed", 00:10:40.998 "digest": "sha512", 00:10:40.998 "dhgroup": "ffdhe4096" 00:10:40.998 } 00:10:40.998 } 00:10:40.998 ]' 00:10:40.998 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:40.998 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:40.998 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:40.998 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:40.998 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:10:41.269 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:41.269 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:41.269 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:41.269 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:10:41.269 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:10:41.835 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:41.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:41.836 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:41.836 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.836 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.094 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.094 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:42.094 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:10:42.094 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:10:42.094 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:10:42.094 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:42.094 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:42.094 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:42.094 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:42.094 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.094 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.094 20:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.094 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.094 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.094 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.094 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.094 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.661 00:10:42.661 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:42.661 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:42.661 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:42.661 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:42.661 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:42.661 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.661 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.661 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.661 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:42.661 { 00:10:42.661 "cntlid": 125, 00:10:42.661 "qid": 0, 00:10:42.661 "state": "enabled", 00:10:42.661 "thread": "nvmf_tgt_poll_group_000", 00:10:42.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:42.661 "listen_address": { 00:10:42.661 "trtype": "TCP", 00:10:42.661 "adrfam": "IPv4", 00:10:42.661 "traddr": "10.0.0.3", 00:10:42.661 "trsvcid": "4420" 00:10:42.661 }, 00:10:42.661 "peer_address": { 00:10:42.661 "trtype": "TCP", 00:10:42.661 "adrfam": "IPv4", 00:10:42.661 "traddr": "10.0.0.1", 00:10:42.661 "trsvcid": "36858" 00:10:42.661 }, 00:10:42.661 "auth": { 00:10:42.661 "state": "completed", 00:10:42.661 "digest": "sha512", 00:10:42.661 "dhgroup": "ffdhe4096" 00:10:42.661 } 00:10:42.661 } 00:10:42.661 ]' 00:10:42.661 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:42.661 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:42.661 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:42.661 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:10:42.661 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:42.919 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:42.919 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:42.919 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:42.919 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:10:42.919 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:10:43.853 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:43.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:43.853 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:43.853 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.853 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.853 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.853 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:43.853 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:10:43.853 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:10:43.853 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:10:43.853 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:43.853 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:43.853 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:43.853 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:43.853 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:43.853 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key3 00:10:43.853 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.853 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.853 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.853 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:43.853 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:43.853 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:44.112 00:10:44.112 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:44.112 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:44.112 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:44.370 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:44.370 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:44.370 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.370 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.370 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.370 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:44.370 { 00:10:44.370 "cntlid": 127, 00:10:44.370 "qid": 0, 00:10:44.370 "state": "enabled", 00:10:44.370 "thread": "nvmf_tgt_poll_group_000", 00:10:44.370 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:44.370 "listen_address": { 00:10:44.370 "trtype": "TCP", 00:10:44.370 "adrfam": "IPv4", 00:10:44.370 "traddr": "10.0.0.3", 00:10:44.370 "trsvcid": "4420" 00:10:44.370 }, 00:10:44.370 "peer_address": { 00:10:44.370 "trtype": "TCP", 00:10:44.370 "adrfam": "IPv4", 00:10:44.370 "traddr": "10.0.0.1", 00:10:44.370 "trsvcid": "36882" 00:10:44.370 }, 00:10:44.370 "auth": { 00:10:44.370 "state": "completed", 00:10:44.370 "digest": "sha512", 00:10:44.370 "dhgroup": "ffdhe4096" 00:10:44.370 } 00:10:44.370 } 00:10:44.370 ]' 00:10:44.370 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:44.370 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:44.370 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:44.628 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:44.628 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:44.628 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:44.629 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:44.629 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:44.629 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:10:44.629 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:10:45.263 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:45.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:45.263 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:45.263 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.263 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.263 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.263 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:45.263 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:45.263 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:10:45.263 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:10:45.521 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:10:45.521 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:45.521 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:45.521 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:45.521 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:45.521 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:45.521 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.521 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.521 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.521 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.521 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.521 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.521 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.779 00:10:46.038 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:46.038 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:46.038 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:46.038 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:46.038 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:46.038 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.038 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.038 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.038 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:46.038 { 00:10:46.038 "cntlid": 129, 00:10:46.038 "qid": 0, 00:10:46.038 "state": "enabled", 00:10:46.038 "thread": "nvmf_tgt_poll_group_000", 00:10:46.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:46.038 "listen_address": { 00:10:46.038 "trtype": "TCP", 00:10:46.038 "adrfam": "IPv4", 00:10:46.038 "traddr": "10.0.0.3", 00:10:46.038 "trsvcid": "4420" 00:10:46.038 }, 00:10:46.038 "peer_address": { 00:10:46.038 "trtype": "TCP", 00:10:46.038 "adrfam": "IPv4", 00:10:46.038 "traddr": "10.0.0.1", 00:10:46.038 "trsvcid": "36910" 00:10:46.038 }, 00:10:46.038 "auth": { 00:10:46.038 "state": "completed", 00:10:46.038 "digest": "sha512", 00:10:46.038 "dhgroup": "ffdhe6144" 00:10:46.038 } 00:10:46.038 } 00:10:46.038 ]' 00:10:46.038 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:46.038 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:46.038 20:34:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:46.296 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:46.296 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:46.296 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:46.296 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:46.296 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:46.296 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:10:46.296 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:10:47.227 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:47.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:47.227 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:47.227 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.227 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.227 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.227 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:47.227 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:10:47.227 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:10:47.227 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:10:47.227 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:47.227 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:47.227 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:47.227 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:47.227 20:34:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:47.227 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.227 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.227 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.227 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.227 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.227 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.227 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.485 00:10:47.485 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:47.485 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:47.485 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:47.743 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:47.743 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:47.743 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.743 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.743 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.743 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:47.743 { 00:10:47.743 "cntlid": 131, 00:10:47.743 "qid": 0, 00:10:47.743 "state": "enabled", 00:10:47.743 "thread": "nvmf_tgt_poll_group_000", 00:10:47.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:47.743 "listen_address": { 00:10:47.743 "trtype": "TCP", 00:10:47.743 "adrfam": "IPv4", 00:10:47.743 "traddr": "10.0.0.3", 00:10:47.743 "trsvcid": "4420" 00:10:47.743 }, 00:10:47.743 "peer_address": { 00:10:47.743 "trtype": "TCP", 00:10:47.743 "adrfam": "IPv4", 00:10:47.743 "traddr": "10.0.0.1", 00:10:47.743 "trsvcid": "36936" 00:10:47.743 }, 00:10:47.743 "auth": { 00:10:47.743 "state": "completed", 00:10:47.743 "digest": "sha512", 00:10:47.743 "dhgroup": "ffdhe6144" 00:10:47.743 } 00:10:47.743 } 00:10:47.743 ]' 00:10:47.743 20:34:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:47.743 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:47.743 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:47.743 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:47.743 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:47.743 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:47.743 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:47.743 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:48.001 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:10:48.001 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:10:48.583 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.583 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:48.583 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.583 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.583 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.583 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:48.583 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:10:48.583 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:10:48.841 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:10:48.841 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:48.841 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:48.841 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:10:48.841 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:48.841 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.841 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.841 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.841 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.841 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.841 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.841 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.841 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.405 00:10:49.405 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:49.405 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:49.405 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:49.405 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:49.405 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:49.405 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.405 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.405 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.405 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:49.405 { 00:10:49.405 "cntlid": 133, 00:10:49.405 "qid": 0, 00:10:49.405 "state": "enabled", 00:10:49.405 "thread": "nvmf_tgt_poll_group_000", 00:10:49.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:49.405 "listen_address": { 00:10:49.405 "trtype": "TCP", 00:10:49.405 "adrfam": "IPv4", 00:10:49.405 "traddr": "10.0.0.3", 00:10:49.405 "trsvcid": "4420" 00:10:49.405 }, 00:10:49.405 "peer_address": { 00:10:49.405 "trtype": "TCP", 00:10:49.405 "adrfam": "IPv4", 00:10:49.405 "traddr": "10.0.0.1", 00:10:49.405 "trsvcid": "36968" 00:10:49.405 }, 00:10:49.405 "auth": { 00:10:49.405 "state": "completed", 00:10:49.405 "digest": 
"sha512", 00:10:49.405 "dhgroup": "ffdhe6144" 00:10:49.405 } 00:10:49.405 } 00:10:49.405 ]' 00:10:49.405 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:49.662 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:49.662 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:49.662 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:49.662 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:49.662 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:49.662 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:49.662 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:49.918 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:10:49.918 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:10:50.481 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.481 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:50.481 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.481 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.481 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.481 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:50.481 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:10:50.481 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:10:50.738 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:10:50.738 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:50.738 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:10:50.738 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:50.738 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:50.738 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.738 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key3 00:10:50.738 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.738 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.738 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.738 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:50.738 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:50.738 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:50.995 00:10:50.995 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:50.995 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:50.995 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:51.317 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.317 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.317 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.317 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.317 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.317 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:51.317 { 00:10:51.317 "cntlid": 135, 00:10:51.317 "qid": 0, 00:10:51.317 "state": "enabled", 00:10:51.317 "thread": "nvmf_tgt_poll_group_000", 00:10:51.317 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:51.317 "listen_address": { 00:10:51.317 "trtype": "TCP", 00:10:51.317 "adrfam": "IPv4", 00:10:51.317 "traddr": "10.0.0.3", 00:10:51.317 "trsvcid": "4420" 00:10:51.317 }, 00:10:51.317 "peer_address": { 00:10:51.317 "trtype": "TCP", 00:10:51.317 "adrfam": "IPv4", 00:10:51.317 "traddr": "10.0.0.1", 00:10:51.317 "trsvcid": "34908" 00:10:51.317 }, 00:10:51.317 "auth": { 00:10:51.317 "state": "completed", 00:10:51.317 
"digest": "sha512", 00:10:51.317 "dhgroup": "ffdhe6144" 00:10:51.317 } 00:10:51.317 } 00:10:51.317 ]' 00:10:51.317 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:51.317 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:51.317 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:51.317 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:51.317 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:51.317 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.317 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.317 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.593 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:10:51.593 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:10:52.165 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.165 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:52.165 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.165 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.165 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.165 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:52.165 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:52.165 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:10:52.165 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:10:52.422 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:10:52.422 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:52.422 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:10:52.422 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:52.422 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:52.422 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.422 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.422 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.422 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.422 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.422 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.423 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.423 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.993 00:10:52.993 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:52.993 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:52.993 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:52.993 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:52.993 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:52.993 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.993 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.993 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.993 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:52.993 { 00:10:52.993 "cntlid": 137, 00:10:52.993 "qid": 0, 00:10:52.993 "state": "enabled", 00:10:52.993 "thread": "nvmf_tgt_poll_group_000", 00:10:52.993 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:52.993 "listen_address": { 00:10:52.993 "trtype": "TCP", 00:10:52.993 "adrfam": "IPv4", 00:10:52.993 "traddr": "10.0.0.3", 00:10:52.993 "trsvcid": "4420" 00:10:52.993 }, 00:10:52.993 "peer_address": { 00:10:52.993 "trtype": "TCP", 00:10:52.993 "adrfam": "IPv4", 00:10:52.993 "traddr": "10.0.0.1", 
00:10:52.993 "trsvcid": "34940" 00:10:52.993 }, 00:10:52.993 "auth": { 00:10:52.993 "state": "completed", 00:10:52.993 "digest": "sha512", 00:10:52.993 "dhgroup": "ffdhe8192" 00:10:52.993 } 00:10:52.993 } 00:10:52.993 ]' 00:10:52.993 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:53.252 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:53.252 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:53.252 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:53.252 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:53.252 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.252 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.252 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:53.510 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:10:53.510 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:10:54.080 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.080 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:54.080 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.080 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.080 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.080 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:54.080 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:10:54.080 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:10:54.080 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 
1 00:10:54.080 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:54.080 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:54.080 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:54.080 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:54.080 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.080 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.080 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.080 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.080 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.080 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.080 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.080 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.653 00:10:54.653 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:54.653 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:54.653 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:54.971 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:54.971 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.971 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.971 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.971 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.971 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:54.971 { 00:10:54.971 "cntlid": 139, 00:10:54.971 "qid": 0, 00:10:54.971 "state": "enabled", 00:10:54.971 "thread": "nvmf_tgt_poll_group_000", 00:10:54.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:54.971 "listen_address": { 00:10:54.971 "trtype": "TCP", 00:10:54.971 "adrfam": "IPv4", 00:10:54.972 
"traddr": "10.0.0.3", 00:10:54.972 "trsvcid": "4420" 00:10:54.972 }, 00:10:54.972 "peer_address": { 00:10:54.972 "trtype": "TCP", 00:10:54.972 "adrfam": "IPv4", 00:10:54.972 "traddr": "10.0.0.1", 00:10:54.972 "trsvcid": "34962" 00:10:54.972 }, 00:10:54.972 "auth": { 00:10:54.972 "state": "completed", 00:10:54.972 "digest": "sha512", 00:10:54.972 "dhgroup": "ffdhe8192" 00:10:54.972 } 00:10:54.972 } 00:10:54.972 ]' 00:10:54.972 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:54.972 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:54.972 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:54.972 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:54.972 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:55.233 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:55.233 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.233 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.233 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:10:55.234 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: --dhchap-ctrl-secret DHHC-1:02:ZjgxN2Y0Y2FkYjNlYjY3NjlmMjY1YWRiNjM0MGNhYTFmZDM2MGMyYTJkYjdkNmIyYHOdWw==: 00:10:55.800 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:55.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:55.800 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:55.800 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.800 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.800 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.800 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:55.800 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:10:55.800 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:10:56.059 20:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:10:56.059 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:56.059 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:56.059 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:56.059 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:56.059 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.059 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.059 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.059 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.059 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.059 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.059 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.059 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.624 00:10:56.624 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:56.624 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:56.624 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:56.882 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:56.882 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:56.882 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.882 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.882 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.882 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:56.882 { 00:10:56.882 "cntlid": 141, 00:10:56.882 "qid": 0, 00:10:56.882 "state": "enabled", 00:10:56.882 "thread": "nvmf_tgt_poll_group_000", 00:10:56.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 
00:10:56.882 "listen_address": { 00:10:56.882 "trtype": "TCP", 00:10:56.882 "adrfam": "IPv4", 00:10:56.882 "traddr": "10.0.0.3", 00:10:56.882 "trsvcid": "4420" 00:10:56.882 }, 00:10:56.882 "peer_address": { 00:10:56.882 "trtype": "TCP", 00:10:56.882 "adrfam": "IPv4", 00:10:56.882 "traddr": "10.0.0.1", 00:10:56.882 "trsvcid": "34992" 00:10:56.882 }, 00:10:56.882 "auth": { 00:10:56.882 "state": "completed", 00:10:56.882 "digest": "sha512", 00:10:56.882 "dhgroup": "ffdhe8192" 00:10:56.882 } 00:10:56.882 } 00:10:56.882 ]' 00:10:56.882 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:56.882 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:56.882 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:56.882 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:56.882 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:56.882 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:56.882 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:56.882 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.140 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:10:57.140 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:01:YzY0MzJjYmVmNDViNWE3NjQ4OWY5YjI5ZWVjMWFkMjY8F+b1: 00:10:57.707 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.707 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:57.707 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.707 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.707 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.707 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:57.707 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:10:57.707 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:10:57.964 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:10:57.964 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:57.965 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:57.965 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:57.965 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:57.965 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.965 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key3 00:10:57.965 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.965 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.965 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.965 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:57.965 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:57.965 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:58.529 00:10:58.529 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:58.529 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:58.529 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.529 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.529 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.529 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.529 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.786 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.786 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:58.786 { 00:10:58.786 "cntlid": 143, 00:10:58.786 "qid": 0, 00:10:58.786 "state": "enabled", 00:10:58.786 "thread": "nvmf_tgt_poll_group_000", 00:10:58.786 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:10:58.786 "listen_address": { 00:10:58.786 "trtype": "TCP", 00:10:58.786 "adrfam": "IPv4", 00:10:58.786 "traddr": "10.0.0.3", 00:10:58.786 "trsvcid": "4420" 00:10:58.786 }, 00:10:58.786 "peer_address": { 00:10:58.786 "trtype": "TCP", 00:10:58.786 "adrfam": "IPv4", 00:10:58.786 "traddr": "10.0.0.1", 00:10:58.786 "trsvcid": "35016" 00:10:58.786 }, 00:10:58.786 "auth": { 00:10:58.786 "state": "completed", 00:10:58.786 "digest": "sha512", 00:10:58.786 "dhgroup": "ffdhe8192" 00:10:58.786 } 00:10:58.786 } 00:10:58.786 ]' 00:10:58.786 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:58.786 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:58.786 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:58.786 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:58.786 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:58.786 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.786 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.786 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.044 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:10:59.044 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:10:59.609 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:59.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:59.609 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:10:59.609 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.609 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.609 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.609 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:10:59.609 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:10:59.609 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:10:59.609 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:10:59.609 
20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:10:59.609 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:10:59.609 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:10:59.609 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:59.609 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:59.609 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:59.609 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:59.609 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.609 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:59.609 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.609 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.609 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.609 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:59.609 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:59.609 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.175 00:11:00.175 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:00.175 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.175 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:00.433 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.433 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.433 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.433 20:34:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.433 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.433 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:00.433 { 00:11:00.433 "cntlid": 145, 00:11:00.433 "qid": 0, 00:11:00.433 "state": "enabled", 00:11:00.433 "thread": "nvmf_tgt_poll_group_000", 00:11:00.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:11:00.433 "listen_address": { 00:11:00.433 "trtype": "TCP", 00:11:00.433 "adrfam": "IPv4", 00:11:00.433 "traddr": "10.0.0.3", 00:11:00.433 "trsvcid": "4420" 00:11:00.433 }, 00:11:00.433 "peer_address": { 00:11:00.433 "trtype": "TCP", 00:11:00.433 "adrfam": "IPv4", 00:11:00.433 "traddr": "10.0.0.1", 00:11:00.433 "trsvcid": "55064" 00:11:00.433 }, 00:11:00.433 "auth": { 00:11:00.433 "state": "completed", 00:11:00.433 "digest": "sha512", 00:11:00.433 "dhgroup": "ffdhe8192" 00:11:00.433 } 00:11:00.433 } 00:11:00.433 ]' 00:11:00.433 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:00.434 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:00.434 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:00.434 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:00.434 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:00.434 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.434 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.434 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:00.691 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:11:00.691 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:00:ZmE5MzcyNzFhNzk5NWU5ZWU0NDE2MDc2YmM3ZjRhNjJiM2QwNzEwYTg1MGNmOTY3VXGQYA==: --dhchap-ctrl-secret DHHC-1:03:MjI3MmU3YzAzMWU1YjE2YmNlNDAyMTZiYzhiZmQ1ODc4NTE1MmQ2ZjYxZjYxM2U5YzRmYjUyMGEzZGM2MDAwZJ4UztA=: 00:11:01.260 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.260 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:11:01.260 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.260 20:34:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.260 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.260 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key1 00:11:01.260 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.260 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.260 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.260 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:11:01.260 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:01.260 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:11:01.260 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:01.260 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:01.260 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:01.260 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:01.260 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:11:01.260 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:11:01.260 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:11:01.828 request: 00:11:01.828 { 00:11:01.828 "name": "nvme0", 00:11:01.828 "trtype": "tcp", 00:11:01.828 "traddr": "10.0.0.3", 00:11:01.828 "adrfam": "ipv4", 00:11:01.828 "trsvcid": "4420", 00:11:01.828 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:01.828 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:11:01.828 "prchk_reftag": false, 00:11:01.828 "prchk_guard": false, 00:11:01.828 "hdgst": false, 00:11:01.828 "ddgst": false, 00:11:01.828 "dhchap_key": "key2", 00:11:01.828 "allow_unrecognized_csi": false, 00:11:01.828 "method": "bdev_nvme_attach_controller", 00:11:01.828 "req_id": 1 00:11:01.828 } 00:11:01.828 Got JSON-RPC error response 00:11:01.828 response: 00:11:01.828 { 00:11:01.828 "code": -5, 00:11:01.828 "message": "Input/output error" 00:11:01.828 } 00:11:01.828 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:01.828 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 
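The request/response pair above is the expected failure path: the subsystem was just reconfigured to allow only key1 for this host, so an attach attempt offering key2 is rejected and the RPC returns code -5 (Input/output error), which the NOT wrapper converts into a pass. A minimal stand-alone sketch of the same negative check, under the same socket/NQN assumptions as the earlier sketch:

    # target: allow only key1 for this host
    rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key1
    # host: offer key2 instead; authentication is expected to fail
    if rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 \
        --dhchap-key key2; then
        echo "attach with the wrong key unexpectedly succeeded" >&2
        exit 1
    fi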
00:11:01.828 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:01.828 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:01.828 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:11:01.828 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.828 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.828 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.828 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.828 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.828 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.828 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.828 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:01.828 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:01.828 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:01.828 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:01.828 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:01.828 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:01.828 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:01.828 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:01.829 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:01.829 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:02.398 request: 00:11:02.398 { 00:11:02.398 "name": "nvme0", 00:11:02.398 "trtype": "tcp", 00:11:02.398 "traddr": "10.0.0.3", 00:11:02.398 "adrfam": "ipv4", 00:11:02.398 "trsvcid": "4420", 00:11:02.398 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:02.398 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:11:02.398 "prchk_reftag": false, 00:11:02.398 "prchk_guard": false, 00:11:02.398 "hdgst": false, 00:11:02.398 "ddgst": false, 00:11:02.398 "dhchap_key": "key1", 00:11:02.398 "dhchap_ctrlr_key": "ckey2", 00:11:02.398 "allow_unrecognized_csi": false, 00:11:02.398 "method": "bdev_nvme_attach_controller", 00:11:02.398 "req_id": 1 00:11:02.398 } 00:11:02.398 Got JSON-RPC error response 00:11:02.398 response: 00:11:02.398 { 00:11:02.398 "code": -5, 00:11:02.398 "message": "Input/output error" 00:11:02.398 } 00:11:02.398 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:02.398 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:02.398 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:02.398 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:02.398 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:11:02.398 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.398 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.398 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.398 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key1 00:11:02.398 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.398 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.398 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.398 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.398 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:02.398 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.398 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:02.398 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.398 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:02.398 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.398 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.398 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.398 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.656 request: 00:11:02.656 { 00:11:02.656 "name": "nvme0", 00:11:02.656 "trtype": "tcp", 00:11:02.656 "traddr": "10.0.0.3", 00:11:02.656 "adrfam": "ipv4", 00:11:02.657 "trsvcid": "4420", 00:11:02.657 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:02.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:11:02.657 "prchk_reftag": false, 00:11:02.657 "prchk_guard": false, 00:11:02.657 "hdgst": false, 00:11:02.657 "ddgst": false, 00:11:02.657 "dhchap_key": "key1", 00:11:02.657 "dhchap_ctrlr_key": "ckey1", 00:11:02.657 "allow_unrecognized_csi": false, 00:11:02.657 "method": "bdev_nvme_attach_controller", 00:11:02.657 "req_id": 1 00:11:02.657 } 00:11:02.657 Got JSON-RPC error response 00:11:02.657 response: 00:11:02.657 { 00:11:02.657 "code": -5, 00:11:02.657 "message": "Input/output error" 00:11:02.657 } 00:11:02.657 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:02.657 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:02.657 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:02.657 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:02.657 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:11:02.657 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.657 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.657 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.657 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 66619 00:11:02.657 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 66619 ']' 00:11:02.657 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 66619 00:11:02.657 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:11:02.657 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.657 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66619 00:11:02.915 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:02.915 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:02.915 killing process with pid 66619 00:11:02.915 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 66619' 00:11:02.915 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 66619 00:11:02.915 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 66619 00:11:02.915 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:11:02.915 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:02.915 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:02.915 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.915 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=69355 00:11:02.915 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 69355 00:11:02.915 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 69355 ']' 00:11:02.915 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.915 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.915 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.915 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.915 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.915 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:11:03.847 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.847 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:03.847 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:03.847 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:03.847 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.847 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.847 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:03.847 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 69355 00:11:03.847 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 69355 ']' 00:11:03.847 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.847 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
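At this point the original target process (pid 66619) has been killed and a new nvmf_tgt (pid 69355) is started with --wait-for-rpc and the nvmf_auth debug log component, so the keyring-based rounds that follow are traced. A rough sketch of that restart; the polling loop stands in for the waitforlisten helper and is an assumption, while the binary path, namespace and flags are taken from the log:

    # start a fresh target inside the test network namespace with auth tracing enabled
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # wait until the RPC socket exists before configuring anything (stand-in for waitforlisten)
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done
    # with --wait-for-rpc the app stays paused until framework_start_init is issued
    rpc.py -s /var/tmp/spdk.sock framework_start_init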
00:11:03.847 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.847 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.847 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.105 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.105 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:04.105 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:11:04.105 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.105 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.105 null0 00:11:04.105 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.105 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:11:04.105 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.hs9 00:11:04.105 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.105 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.105 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.105 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Ho5 ]] 00:11:04.105 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ho5 00:11:04.105 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.105 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.105 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.105 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:11:04.105 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.2wi 00:11:04.105 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.105 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.105 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.105 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.ArJ ]] 00:11:04.105 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ArJ 00:11:04.105 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.4TD 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.oRY ]] 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oRY 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.tct 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key3 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:04.106 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:05.038 nvme0n1 00:11:05.038 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:05.038 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:05.038 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.296 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.296 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.296 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.296 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.296 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.296 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:05.296 { 00:11:05.296 "cntlid": 1, 00:11:05.296 "qid": 0, 00:11:05.296 "state": "enabled", 00:11:05.296 "thread": "nvmf_tgt_poll_group_000", 00:11:05.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:11:05.296 "listen_address": { 00:11:05.296 "trtype": "TCP", 00:11:05.296 "adrfam": "IPv4", 00:11:05.296 "traddr": "10.0.0.3", 00:11:05.296 "trsvcid": "4420" 00:11:05.296 }, 00:11:05.296 "peer_address": { 00:11:05.296 "trtype": "TCP", 00:11:05.296 "adrfam": "IPv4", 00:11:05.296 "traddr": "10.0.0.1", 00:11:05.296 "trsvcid": "55128" 00:11:05.296 }, 00:11:05.296 "auth": { 00:11:05.296 "state": "completed", 00:11:05.296 "digest": "sha512", 00:11:05.296 "dhgroup": "ffdhe8192" 00:11:05.296 } 00:11:05.296 } 00:11:05.296 ]' 00:11:05.296 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:05.296 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:05.296 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:05.296 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:05.296 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:05.296 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.296 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.296 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.553 20:34:19 
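The round above is the first one after the restart: the DH-HMAC-CHAP material now comes from keyring entries loaded with keyring_file_add_key rather than inline secrets, and key3 has no companion ckey3, so only unidirectional authentication is configured. A condensed sketch of that keyring setup, using the key-file paths printed in the log (the target RPC socket is again assumed to be /var/tmp/spdk.sock):

    # register secrets as named keyring entries on the target
    rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key0  /tmp/spdk.key-null.hs9
    rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ho5
    rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key3  /tmp/spdk.key-sha512.tct
    # allow the host with key3 only; no --dhchap-ctrlr-key means unidirectional auth
    rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key3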
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:11:05.553 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:11:06.124 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.124 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:11:06.124 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.124 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.124 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.124 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key3 00:11:06.124 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.124 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.124 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.124 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:11:06.124 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:11:06.382 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:11:06.382 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:06.382 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:11:06.382 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:06.382 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:06.382 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:06.382 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:06.382 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:06.382 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:06.382 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:06.640 request: 00:11:06.640 { 00:11:06.640 "name": "nvme0", 00:11:06.640 "trtype": "tcp", 00:11:06.640 "traddr": "10.0.0.3", 00:11:06.640 "adrfam": "ipv4", 00:11:06.640 "trsvcid": "4420", 00:11:06.640 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:06.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:11:06.640 "prchk_reftag": false, 00:11:06.640 "prchk_guard": false, 00:11:06.640 "hdgst": false, 00:11:06.640 "ddgst": false, 00:11:06.640 "dhchap_key": "key3", 00:11:06.640 "allow_unrecognized_csi": false, 00:11:06.640 "method": "bdev_nvme_attach_controller", 00:11:06.640 "req_id": 1 00:11:06.640 } 00:11:06.640 Got JSON-RPC error response 00:11:06.640 response: 00:11:06.640 { 00:11:06.640 "code": -5, 00:11:06.640 "message": "Input/output error" 00:11:06.640 } 00:11:06.640 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:06.640 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:06.640 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:06.640 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:06.640 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:11:06.640 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:11:06.640 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:11:06.640 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:11:06.902 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:11:06.902 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:06.902 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:11:06.902 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:06.902 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:06.902 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:06.902 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:06.902 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:06.902 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:06.902 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:06.902 request: 00:11:06.902 { 00:11:06.902 "name": "nvme0", 00:11:06.902 "trtype": "tcp", 00:11:06.902 "traddr": "10.0.0.3", 00:11:06.902 "adrfam": "ipv4", 00:11:06.902 "trsvcid": "4420", 00:11:06.902 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:06.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:11:06.902 "prchk_reftag": false, 00:11:06.902 "prchk_guard": false, 00:11:06.902 "hdgst": false, 00:11:06.902 "ddgst": false, 00:11:06.902 "dhchap_key": "key3", 00:11:06.902 "allow_unrecognized_csi": false, 00:11:06.902 "method": "bdev_nvme_attach_controller", 00:11:06.902 "req_id": 1 00:11:06.902 } 00:11:06.902 Got JSON-RPC error response 00:11:06.902 response: 00:11:06.902 { 00:11:06.902 "code": -5, 00:11:06.902 "message": "Input/output error" 00:11:06.902 } 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:07.164 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:07.735 request: 00:11:07.735 { 00:11:07.735 "name": "nvme0", 00:11:07.735 "trtype": "tcp", 00:11:07.735 "traddr": "10.0.0.3", 00:11:07.735 "adrfam": "ipv4", 00:11:07.735 "trsvcid": "4420", 00:11:07.735 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:07.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:11:07.735 "prchk_reftag": false, 00:11:07.735 "prchk_guard": false, 00:11:07.735 "hdgst": false, 00:11:07.735 "ddgst": false, 00:11:07.735 "dhchap_key": "key0", 00:11:07.735 "dhchap_ctrlr_key": "key1", 00:11:07.735 "allow_unrecognized_csi": false, 00:11:07.735 "method": "bdev_nvme_attach_controller", 00:11:07.735 "req_id": 1 00:11:07.735 } 00:11:07.735 Got JSON-RPC error response 00:11:07.735 response: 00:11:07.735 { 00:11:07.735 "code": -5, 00:11:07.735 "message": "Input/output error" 00:11:07.735 } 00:11:07.735 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:07.735 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:07.735 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:07.735 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:07.735 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:11:07.735 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:11:07.735 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:11:07.994 nvme0n1 00:11:07.994 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:11:07.994 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:11:07.994 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.994 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.994 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.994 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.252 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key1 00:11:08.252 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.252 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.252 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.252 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:11:08.253 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:11:08.253 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:11:09.185 nvme0n1 00:11:09.186 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:11:09.186 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:11:09.186 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.443 20:34:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.443 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:09.443 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.443 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.443 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.443 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:11:09.443 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:11:09.443 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.700 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.700 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:11:09.700 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid 38d6bd30-54c5-4858-a242-ab15764fb2d9 -l 0 --dhchap-secret DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: --dhchap-ctrl-secret DHHC-1:03:NGM1OGY0MjY0NTg3YzNjMzIyYzM3M2VkZWU4YjA3OTNmY2QyYWU2NjNmZDBhOWJmOGRlZWRjNGI3ZTM4MWM2M56FbNc=: 00:11:10.267 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:11:10.267 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:11:10.267 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:11:10.267 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:11:10.267 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:11:10.267 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:11:10.267 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:11:10.267 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.267 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:10.267 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:11:10.267 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:10.267 20:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:11:10.267 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:10.267 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:10.267 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:10.267 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:10.267 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:11:10.267 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:11:10.267 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:11:10.833 request: 00:11:10.833 { 00:11:10.833 "name": "nvme0", 00:11:10.833 "trtype": "tcp", 00:11:10.833 "traddr": "10.0.0.3", 00:11:10.833 "adrfam": "ipv4", 00:11:10.833 "trsvcid": "4420", 00:11:10.833 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:10.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9", 00:11:10.833 "prchk_reftag": false, 00:11:10.833 "prchk_guard": false, 00:11:10.833 "hdgst": false, 00:11:10.833 "ddgst": false, 00:11:10.833 "dhchap_key": "key1", 00:11:10.833 "allow_unrecognized_csi": false, 00:11:10.833 "method": "bdev_nvme_attach_controller", 00:11:10.833 "req_id": 1 00:11:10.833 } 00:11:10.833 Got JSON-RPC error response 00:11:10.833 response: 00:11:10.833 { 00:11:10.833 "code": -5, 00:11:10.833 "message": "Input/output error" 00:11:10.833 } 00:11:10.833 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:10.833 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:10.833 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:10.833 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:10.833 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:10.833 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:10.833 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 
--dhchap-ctrlr-key key3 00:11:11.766 nvme0n1 00:11:11.766 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:11:11.766 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.766 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:11:11.766 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.766 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.766 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.024 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:11:12.024 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.024 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.024 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.024 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:11:12.024 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:11:12.024 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:11:12.282 nvme0n1 00:11:12.282 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:11:12.282 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:11:12.283 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.540 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.540 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.540 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.798 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key1 --dhchap-ctrlr-key key3 00:11:12.798 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.798 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:12.798 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.798 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: '' 2s 00:11:12.798 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:11:12.798 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:11:12.798 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: 00:11:12.798 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:11:12.798 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:11:12.798 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:11:12.798 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: ]] 00:11:12.798 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NzE5NzI5NzZlNzU0YjJkMDVkYTU1MTYwNjUyNThlMTDCyRG8: 00:11:12.798 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:11:12.798 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:11:12.798 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:11:14.700 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:11:14.700 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:11:14.700 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:11:14.700 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:11:14.700 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:11:14.700 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:11:14.958 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:11:14.958 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key1 --dhchap-ctrlr-key key2 00:11:14.958 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.958 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.958 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.958 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: 2s 00:11:14.958 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:11:14.958 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # 
ctl=nvme0 00:11:14.958 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:11:14.958 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: 00:11:14.958 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:11:14.958 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:11:14.958 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:11:14.958 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: ]] 00:11:14.958 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZTljZjViOTNhNDcxZGMxYWU3ZDczOTYzYTI3NGJiMGZmNTdmZTQ1NzUyNzg2MWJhVuHWXQ==: 00:11:14.958 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:11:14.958 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:11:16.959 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:11:16.959 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:11:16.959 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:11:16.959 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:11:16.959 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:11:16.959 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:11:16.959 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:11:16.959 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.959 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:16.959 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.959 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.959 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.959 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:16.959 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:16.959 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:17.895 nvme0n1 00:11:17.895 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:17.895 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.895 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.895 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.895 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:17.895 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:18.154 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:11:18.154 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:11:18.154 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.412 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.412 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:11:18.412 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.412 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.412 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.412 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:11:18.412 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:11:18.670 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:11:18.670 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:11:18.670 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.928 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.928 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key2 
--dhchap-ctrlr-key key3 00:11:18.928 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.928 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.928 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.928 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:11:18.928 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:18.928 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:11:18.928 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:11:18.928 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:18.928 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:11:18.928 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:18.928 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:11:18.928 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:11:19.557 request: 00:11:19.557 { 00:11:19.557 "name": "nvme0", 00:11:19.557 "dhchap_key": "key1", 00:11:19.557 "dhchap_ctrlr_key": "key3", 00:11:19.557 "method": "bdev_nvme_set_keys", 00:11:19.557 "req_id": 1 00:11:19.557 } 00:11:19.557 Got JSON-RPC error response 00:11:19.557 response: 00:11:19.557 { 00:11:19.557 "code": -13, 00:11:19.557 "message": "Permission denied" 00:11:19.557 } 00:11:19.557 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:19.557 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:19.557 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:19.557 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:19.557 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:11:19.557 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:11:19.557 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.557 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:11:19.557 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:11:20.930 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:11:20.931 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
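A minimal sketch of the re-key sequence these traces exercise, reconstructed from the hostrpc/rpc_cmd invocations above and not itself part of the captured output (the NQNs, the /var/tmp/host.sock socket, and the key names are simply the ones this run uses): the target is first told which keys the host may present, then the host rotates the keys on the controller it already has attached.

  # target side: permit key2/key3 for this host on cnode0 (auth.sh issues this via rpc_cmd)
  rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  # host side: re-authenticate the existing bdev controller with the new key pair
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

As the NOT wrappers in the trace assert, asking for a pair the subsystem does not allow (key1/key3 here, key2/key0 below) is rejected with JSON-RPC error -13 "Permission denied", while a bdev_nvme_attach_controller that cannot authenticate fails with -5 "Input/output error".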
00:11:20.931 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:11:20.931 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:11:20.931 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:20.931 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.931 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.931 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.931 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:20.931 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:20.931 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:21.865 nvme0n1 00:11:21.865 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:21.865 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.865 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.865 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.865 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:11:21.865 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:21.865 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:11:21.865 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:11:21.865 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:21.865 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:11:21.865 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:21.865 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc 
bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:11:21.865 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:11:22.123 request: 00:11:22.123 { 00:11:22.123 "name": "nvme0", 00:11:22.123 "dhchap_key": "key2", 00:11:22.123 "dhchap_ctrlr_key": "key0", 00:11:22.123 "method": "bdev_nvme_set_keys", 00:11:22.123 "req_id": 1 00:11:22.123 } 00:11:22.123 Got JSON-RPC error response 00:11:22.123 response: 00:11:22.123 { 00:11:22.123 "code": -13, 00:11:22.123 "message": "Permission denied" 00:11:22.123 } 00:11:22.123 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:22.123 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:22.123 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:22.123 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:22.123 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:11:22.123 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.123 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:11:22.380 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:11:22.380 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:11:23.319 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:11:23.319 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:11:23.319 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.577 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:11:23.577 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:11:23.577 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:11:23.577 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 66651 00:11:23.577 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 66651 ']' 00:11:23.577 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 66651 00:11:23.577 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:11:23.577 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.577 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66651 00:11:23.577 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:23.577 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:23.577 killing process with pid 66651 00:11:23.577 20:34:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66651' 00:11:23.577 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 66651 00:11:23.577 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 66651 00:11:23.834 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:11:23.834 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:23.834 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:11:23.834 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:23.834 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:11:23.834 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:23.834 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:23.834 rmmod nvme_tcp 00:11:23.834 rmmod nvme_fabrics 00:11:23.834 rmmod nvme_keyring 00:11:23.834 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:23.834 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:11:23.834 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:11:23.834 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 69355 ']' 00:11:23.834 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 69355 00:11:23.834 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 69355 ']' 00:11:23.834 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 69355 00:11:23.834 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:11:23.834 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.834 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69355 00:11:23.834 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:23.834 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:23.834 killing process with pid 69355 00:11:23.834 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69355' 00:11:23.834 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 69355 00:11:23.834 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 69355 00:11:24.091 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:24.091 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:24.091 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:24.091 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:11:24.091 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 
00:11:24.091 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:11:24.091 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:24.091 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:24.091 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:24.091 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:24.091 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:24.091 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:24.091 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:24.091 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:24.091 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:24.091 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:24.091 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:24.091 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:24.091 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:24.091 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:24.091 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.hs9 /tmp/spdk.key-sha256.2wi /tmp/spdk.key-sha384.4TD /tmp/spdk.key-sha512.tct /tmp/spdk.key-sha512.Ho5 /tmp/spdk.key-sha384.ArJ /tmp/spdk.key-sha256.oRY '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:11:24.349 00:11:24.349 real 2m32.298s 00:11:24.349 user 5m59.081s 00:11:24.349 sys 0m19.877s 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.349 ************************************ 00:11:24.349 END TEST nvmf_auth_target 
00:11:24.349 ************************************ 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:24.349 ************************************ 00:11:24.349 START TEST nvmf_bdevio_no_huge 00:11:24.349 ************************************ 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:24.349 * Looking for test storage... 00:11:24.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:24.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.349 --rc genhtml_branch_coverage=1 00:11:24.349 --rc genhtml_function_coverage=1 00:11:24.349 --rc genhtml_legend=1 00:11:24.349 --rc geninfo_all_blocks=1 00:11:24.349 --rc geninfo_unexecuted_blocks=1 00:11:24.349 00:11:24.349 ' 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:24.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.349 --rc genhtml_branch_coverage=1 00:11:24.349 --rc genhtml_function_coverage=1 00:11:24.349 --rc genhtml_legend=1 00:11:24.349 --rc geninfo_all_blocks=1 00:11:24.349 --rc geninfo_unexecuted_blocks=1 00:11:24.349 00:11:24.349 ' 00:11:24.349 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:24.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.349 --rc genhtml_branch_coverage=1 00:11:24.349 --rc genhtml_function_coverage=1 00:11:24.349 --rc genhtml_legend=1 00:11:24.349 --rc geninfo_all_blocks=1 00:11:24.350 --rc geninfo_unexecuted_blocks=1 00:11:24.350 00:11:24.350 ' 00:11:24.350 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:24.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.350 --rc genhtml_branch_coverage=1 00:11:24.350 --rc genhtml_function_coverage=1 00:11:24.350 --rc genhtml_legend=1 00:11:24.350 --rc geninfo_all_blocks=1 00:11:24.350 --rc geninfo_unexecuted_blocks=1 00:11:24.350 00:11:24.350 ' 00:11:24.350 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:24.350 
20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:11:24.350 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:24.350 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.350 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.350 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.350 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.350 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.350 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.350 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.350 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.350 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.608 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:11:24.608 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:11:24.608 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.608 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.608 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:24.608 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:24.608 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:24.608 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:11:24.608 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.608 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.608 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.608 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:24.609 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:24.609 
20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:24.609 Cannot find device "nvmf_init_br" 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:24.609 Cannot find device "nvmf_init_br2" 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:24.609 Cannot find device "nvmf_tgt_br" 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:24.609 Cannot find device "nvmf_tgt_br2" 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:24.609 Cannot find device "nvmf_init_br" 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:24.609 Cannot find device "nvmf_init_br2" 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:24.609 Cannot find device "nvmf_tgt_br" 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:24.609 Cannot find device "nvmf_tgt_br2" 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:11:24.609 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:24.609 Cannot find device "nvmf_br" 00:11:24.609 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:11:24.609 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:24.609 Cannot find device "nvmf_init_if" 00:11:24.609 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:11:24.609 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:24.609 Cannot find device "nvmf_init_if2" 00:11:24.609 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:11:24.609 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:11:24.609 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:24.609 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:11:24.609 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:24.609 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:24.609 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:11:24.609 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:24.609 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:24.609 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:24.609 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:24.609 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:24.609 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:24.609 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:24.609 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:24.609 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:24.609 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:24.609 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:24.609 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:24.609 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:24.609 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:24.609 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:24.609 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:24.609 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:24.610 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:24.610 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:24.610 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:24.610 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:24.610 20:34:39 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:24.610 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:24.610 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:24.610 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:24.867 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:24.867 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:11:24.867 00:11:24.867 --- 10.0.0.3 ping statistics --- 00:11:24.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.867 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:24.867 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:24.867 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 00:11:24.867 00:11:24.867 --- 10.0.0.4 ping statistics --- 00:11:24.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.867 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:24.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:24.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:11:24.867 00:11:24.867 --- 10.0.0.1 ping statistics --- 00:11:24.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.867 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:24.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:24.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:11:24.867 00:11:24.867 --- 10.0.0.2 ping statistics --- 00:11:24.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.867 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=69961 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 69961 00:11:24.867 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 69961 ']' 00:11:24.868 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:11:24.868 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.868 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.868 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.868 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.868 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:24.868 [2024-11-26 20:34:39.249941] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
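The trace above is nvmf_veth_init from test/nvmf/common.sh building the bdevio-no-huge topology: two initiator veth pairs on the host, two target veth pairs moved into the nvmf_tgt_ns_spdk namespace, all host-side peer ends joined on the nvmf_br bridge, iptables ACCEPT rules for NVMe/TCP port 4420, ping checks in both directions, and finally nvmf_tgt launched inside the namespace without hugepages. A condensed sketch of the same steps, using the interface names and addresses shown in the trace (the guards, retries, and the SPDK_NVMF comment tags that the later teardown greps for are omitted here):

    # create the target namespace and the veth pairs seen in the trace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # initiator addresses stay on the host, target addresses live inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bring everything up and bridge the host-side peer ends together
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # accept NVMe/TCP traffic on 4420 and verify connectivity in both directions
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

    # the no-huge case then starts the target inside the namespace without hugepages
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &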
00:11:24.868 [2024-11-26 20:34:39.250000] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:11:24.868 [2024-11-26 20:34:39.397412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:25.125 [2024-11-26 20:34:39.446887] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.125 [2024-11-26 20:34:39.446927] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.125 [2024-11-26 20:34:39.446933] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.125 [2024-11-26 20:34:39.446938] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.125 [2024-11-26 20:34:39.446942] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:25.125 [2024-11-26 20:34:39.447450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:25.125 [2024-11-26 20:34:39.447494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:25.125 [2024-11-26 20:34:39.447675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:25.125 [2024-11-26 20:34:39.447678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:25.125 [2024-11-26 20:34:39.452348] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:25.691 [2024-11-26 20:34:40.154849] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:25.691 Malloc0 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.691 20:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:25.691 [2024-11-26 20:34:40.191037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:11:25.691 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:11:25.692 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:25.692 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:25.692 { 00:11:25.692 "params": { 00:11:25.692 "name": "Nvme$subsystem", 00:11:25.692 "trtype": "$TEST_TRANSPORT", 00:11:25.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:25.692 "adrfam": "ipv4", 00:11:25.692 "trsvcid": "$NVMF_PORT", 00:11:25.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:25.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:25.692 "hdgst": ${hdgst:-false}, 00:11:25.692 "ddgst": ${ddgst:-false} 00:11:25.692 }, 00:11:25.692 "method": "bdev_nvme_attach_controller" 00:11:25.692 } 00:11:25.692 EOF 00:11:25.692 )") 00:11:25.692 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:11:25.692 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
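The rpc_cmd calls traced above configure the freshly started target for the bdevio run: a TCP transport with the -o -u 8192 options from NVMF_TRANSPORT_OPTS, a 64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 exposing that bdev, and a TCP listener on 10.0.0.3:4420. bdevio is then run against a JSON config assembled by gen_nvmf_target_json (only the params/method fragment printed just below appears verbatim in the log; the surrounding subsystems/bdev wrapper and the temporary file used here are assumptions standing in for the /dev/fd/62 plumbing in the script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # target-side configuration, mirroring the rpc_cmd calls in target/bdevio.sh
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # bdevio attaches to that listener through a generated bdev_nvme_attach_controller config
    cat > /tmp/bdevio_nvme.json <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    JSON
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json --no-huge -s 1024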
00:11:25.692 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:11:25.692 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:25.692 "params": { 00:11:25.692 "name": "Nvme1", 00:11:25.692 "trtype": "tcp", 00:11:25.692 "traddr": "10.0.0.3", 00:11:25.692 "adrfam": "ipv4", 00:11:25.692 "trsvcid": "4420", 00:11:25.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:25.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:25.692 "hdgst": false, 00:11:25.692 "ddgst": false 00:11:25.692 }, 00:11:25.692 "method": "bdev_nvme_attach_controller" 00:11:25.692 }' 00:11:25.692 [2024-11-26 20:34:40.231523] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:11:25.692 [2024-11-26 20:34:40.231583] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid69996 ] 00:11:25.951 [2024-11-26 20:34:40.376747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:25.951 [2024-11-26 20:34:40.427342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.951 [2024-11-26 20:34:40.427412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.951 [2024-11-26 20:34:40.427411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.951 [2024-11-26 20:34:40.442514] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:26.209 I/O targets: 00:11:26.209 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:26.209 00:11:26.209 00:11:26.209 CUnit - A unit testing framework for C - Version 2.1-3 00:11:26.209 http://cunit.sourceforge.net/ 00:11:26.209 00:11:26.209 00:11:26.209 Suite: bdevio tests on: Nvme1n1 00:11:26.209 Test: blockdev write read block ...passed 00:11:26.209 Test: blockdev write zeroes read block ...passed 00:11:26.209 Test: blockdev write zeroes read no split ...passed 00:11:26.209 Test: blockdev write zeroes read split ...passed 00:11:26.209 Test: blockdev write zeroes read split partial ...passed 00:11:26.209 Test: blockdev reset ...[2024-11-26 20:34:40.648414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:26.209 [2024-11-26 20:34:40.648973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe5320 (9): Bad file descriptor 00:11:26.209 [2024-11-26 20:34:40.665397] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:26.209 passed 00:11:26.209 Test: blockdev write read 8 blocks ...passed 00:11:26.209 Test: blockdev write read size > 128k ...passed 00:11:26.209 Test: blockdev write read invalid size ...passed 00:11:26.209 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:26.209 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:26.209 Test: blockdev write read max offset ...passed 00:11:26.209 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:26.209 Test: blockdev writev readv 8 blocks ...passed 00:11:26.209 Test: blockdev writev readv 30 x 1block ...passed 00:11:26.209 Test: blockdev writev readv block ...passed 00:11:26.209 Test: blockdev writev readv size > 128k ...passed 00:11:26.209 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:26.209 Test: blockdev comparev and writev ...[2024-11-26 20:34:40.670753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:26.209 [2024-11-26 20:34:40.670777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:26.209 [2024-11-26 20:34:40.670788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:26.209 [2024-11-26 20:34:40.670793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:26.209 [2024-11-26 20:34:40.671057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:26.209 [2024-11-26 20:34:40.671069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:26.209 [2024-11-26 20:34:40.671078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:26.209 [2024-11-26 20:34:40.671083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:26.209 [2024-11-26 20:34:40.671265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:26.209 [2024-11-26 20:34:40.671277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:26.209 [2024-11-26 20:34:40.671286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:26.209 [2024-11-26 20:34:40.671290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:26.209 [2024-11-26 20:34:40.671524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:26.209 passed 00:11:26.209 Test: blockdev nvme passthru rw ...passed 00:11:26.209 Test: blockdev nvme passthru vendor specific ...passed 00:11:26.209 Test: blockdev nvme admin passthru ...[2024-11-26 20:34:40.671535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:26.209 [2024-11-26 20:34:40.671544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:26.209 [2024-11-26 20:34:40.671549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:26.209 [2024-11-26 20:34:40.672086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:26.209 [2024-11-26 20:34:40.672094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:26.209 [2024-11-26 20:34:40.672167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:26.209 [2024-11-26 20:34:40.672172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:26.209 [2024-11-26 20:34:40.672235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:26.209 [2024-11-26 20:34:40.672240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:26.209 [2024-11-26 20:34:40.672298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:26.209 [2024-11-26 20:34:40.672303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:26.209 passed 00:11:26.209 Test: blockdev copy ...passed 00:11:26.209 00:11:26.209 Run Summary: Type Total Ran Passed Failed Inactive 00:11:26.210 suites 1 1 n/a 0 0 00:11:26.210 tests 23 23 23 0 0 00:11:26.210 asserts 152 152 152 0 n/a 00:11:26.210 00:11:26.210 Elapsed time = 0.145 seconds 00:11:26.466 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.466 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.466 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:26.466 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.467 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:26.467 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:11:26.467 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:26.467 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:11:26.723 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:26.723 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:11:26.723 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:26.723 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:26.723 rmmod nvme_tcp 00:11:26.723 rmmod nvme_fabrics 00:11:26.723 rmmod nvme_keyring 00:11:26.723 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:26.723 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:11:26.723 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:11:26.723 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 69961 ']' 00:11:26.723 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 69961 00:11:26.723 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 69961 ']' 00:11:26.723 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 69961 00:11:26.724 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:11:26.724 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:26.724 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69961 00:11:26.724 killing process with pid 69961 00:11:26.724 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:26.724 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:26.724 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69961' 00:11:26.724 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 69961 00:11:26.724 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 69961 00:11:26.982 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:26.982 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:26.982 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:26.982 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:11:26.982 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:11:26.982 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:26.982 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:11:26.982 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:26.982 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:26.982 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:26.982 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:26.982 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:26.982 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:26.982 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:26.982 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:26.982 20:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:26.982 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:26.982 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:26.982 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:11:27.240 00:11:27.240 real 0m2.891s 00:11:27.240 user 0m8.937s 00:11:27.240 sys 0m1.084s 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.240 ************************************ 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:27.240 END TEST nvmf_bdevio_no_huge 00:11:27.240 ************************************ 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:27.240 ************************************ 00:11:27.240 START TEST nvmf_tls 00:11:27.240 ************************************ 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:27.240 * Looking for test storage... 
00:11:27.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:27.240 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:11:27.499 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:11:27.499 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.499 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:11:27.499 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.499 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:11:27.499 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:11:27.499 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.499 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:11:27.499 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.499 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.499 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.499 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:11:27.499 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.499 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:27.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.499 --rc genhtml_branch_coverage=1 00:11:27.499 --rc genhtml_function_coverage=1 00:11:27.499 --rc genhtml_legend=1 00:11:27.499 --rc geninfo_all_blocks=1 00:11:27.499 --rc geninfo_unexecuted_blocks=1 00:11:27.499 00:11:27.499 ' 00:11:27.499 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:27.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.499 --rc genhtml_branch_coverage=1 00:11:27.499 --rc genhtml_function_coverage=1 00:11:27.499 --rc genhtml_legend=1 00:11:27.499 --rc geninfo_all_blocks=1 00:11:27.499 --rc geninfo_unexecuted_blocks=1 00:11:27.499 00:11:27.499 ' 00:11:27.499 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:27.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.499 --rc genhtml_branch_coverage=1 00:11:27.499 --rc genhtml_function_coverage=1 00:11:27.499 --rc genhtml_legend=1 00:11:27.499 --rc geninfo_all_blocks=1 00:11:27.499 --rc geninfo_unexecuted_blocks=1 00:11:27.499 00:11:27.499 ' 00:11:27.499 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:27.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.499 --rc genhtml_branch_coverage=1 00:11:27.499 --rc genhtml_function_coverage=1 00:11:27.499 --rc genhtml_legend=1 00:11:27.499 --rc geninfo_all_blocks=1 00:11:27.499 --rc geninfo_unexecuted_blocks=1 00:11:27.499 00:11:27.499 ' 00:11:27.499 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:27.499 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:11:27.499 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.499 20:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.499 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.499 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.499 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:27.500 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:27.500 
20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:27.500 Cannot find device "nvmf_init_br" 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:27.500 Cannot find device "nvmf_init_br2" 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:27.500 Cannot find device "nvmf_tgt_br" 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:27.500 Cannot find device "nvmf_tgt_br2" 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:27.500 Cannot find device "nvmf_init_br" 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:27.500 Cannot find device "nvmf_init_br2" 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:27.500 Cannot find device "nvmf_tgt_br" 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:27.500 Cannot find device "nvmf_tgt_br2" 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:27.500 Cannot find device "nvmf_br" 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:27.500 Cannot find device "nvmf_init_if" 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:27.500 Cannot find device "nvmf_init_if2" 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:27.500 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:27.500 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:11:27.500 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:27.501 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:27.501 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:27.501 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:27.501 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:27.501 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:27.501 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:27.501 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:27.501 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:27.501 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:27.501 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:27.501 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:27.501 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:27.501 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:27.501 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:27.501 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:27.501 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:27.501 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:27.501 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:27.501 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:27.501 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:27.501 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:27.759 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:27.759 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:27.759 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:27.759 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:27.759 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:27.759 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:27.759 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:27.759 20:34:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:27.759 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:27.759 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:27.759 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:27.759 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:27.759 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:11:27.759 00:11:27.759 --- 10.0.0.3 ping statistics --- 00:11:27.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.759 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:11:27.759 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:27.759 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:27.759 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:11:27.759 00:11:27.759 --- 10.0.0.4 ping statistics --- 00:11:27.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.759 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:11:27.759 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:27.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:27.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:11:27.759 00:11:27.759 --- 10.0.0.1 ping statistics --- 00:11:27.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.759 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:11:27.759 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:27.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:27.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:11:27.759 00:11:27.759 --- 10.0.0.2 ping statistics --- 00:11:27.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.759 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:27.759 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.759 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:11:27.759 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:27.759 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.760 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:27.760 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:27.760 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.760 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:27.760 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:27.760 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:11:27.760 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:27.760 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:27.760 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:11:27.760 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=70221 00:11:27.760 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 70221 00:11:27.760 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70221 ']' 00:11:27.760 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.760 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:11:27.760 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:27.760 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.760 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:27.760 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:11:27.760 [2024-11-26 20:34:42.162012] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
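The nvmf/common.sh trace above amounts to building a small veth-plus-bridge test bed, moving the target-side interfaces into the nvmf_tgt_ns_spdk namespace, and then launching nvmf_tgt inside that namespace. A condensed sketch of that sequence, using the names and 10.0.0.x/24 addresses from the trace (the second interface pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, is handled the same way; error handling and the iptables comment tags are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                          # bridge joins both halves
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                               # host -> namespaced target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                # target -> host
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
  # waitforlisten then polls /var/tmp/spdk.sock until the target's RPC server is up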
00:11:27.760 [2024-11-26 20:34:42.162086] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.760 [2024-11-26 20:34:42.297330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.018 [2024-11-26 20:34:42.333719] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:28.018 [2024-11-26 20:34:42.333765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:28.018 [2024-11-26 20:34:42.333772] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.018 [2024-11-26 20:34:42.333777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.018 [2024-11-26 20:34:42.333781] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:28.018 [2024-11-26 20:34:42.334047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.586 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.586 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:11:28.586 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:28.586 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:28.586 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:11:28.586 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.586 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:11:28.586 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:11:28.844 true 00:11:28.844 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:11:28.844 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:28.844 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:11:28.844 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:11:28.844 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:29.157 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:29.157 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:11:29.431 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:11:29.431 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:11:29.431 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:11:29.431 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:11:29.431 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:11:29.689 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:11:29.689 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:11:29.689 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:29.689 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:11:29.946 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:11:29.946 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:11:29.946 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:11:30.204 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:11:30.204 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:30.204 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:11:30.204 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:11:30.204 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:11:30.462 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:11:30.462 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.4bX1hglka6 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.jdIyaxSojI 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.4bX1hglka6 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.jdIyaxSojI 00:11:30.720 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:30.979 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:11:31.237 [2024-11-26 20:34:45.601979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:31.237 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.4bX1hglka6 00:11:31.237 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.4bX1hglka6 00:11:31.237 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:31.495 [2024-11-26 20:34:45.838068] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:31.495 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:31.753 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:11:31.753 [2024-11-26 20:34:46.270177] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:31.753 [2024-11-26 20:34:46.270387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:31.753 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:32.010 malloc0 00:11:32.010 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:32.327 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.4bX1hglka6 00:11:32.585 20:34:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:11:32.585 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.4bX1hglka6 00:11:44.779 Initializing NVMe Controllers 00:11:44.779 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:11:44.779 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:44.779 Initialization complete. Launching workers. 00:11:44.779 ======================================================== 00:11:44.779 Latency(us) 00:11:44.779 Device Information : IOPS MiB/s Average min max 00:11:44.779 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17785.99 69.48 3598.58 1101.34 12369.89 00:11:44.779 ======================================================== 00:11:44.780 Total : 17785.99 69.48 3598.58 1101.34 12369.89 00:11:44.780 00:11:44.780 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4bX1hglka6 00:11:44.780 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:44.780 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:44.780 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:44.780 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.4bX1hglka6 00:11:44.780 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:44.780 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70452 00:11:44.780 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:44.780 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70452 /var/tmp/bdevperf.sock 00:11:44.780 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:44.780 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70452 ']' 00:11:44.780 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:44.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:44.780 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.780 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
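Once nvmf_tgt is listening on its RPC socket, tls.sh drives the whole TLS setup over rpc.py (the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path in the trace, shortened here): the ssl socket implementation is pinned to TLS 1.3, a PSK in the NVMe TLS interchange form produced by format_interchange_psk is written to a mode-0600 temp file, the subsystem and TLS-enabled listener are created, and the key is registered and bound to host1. Roughly, with $key_path standing in for the mktemp result (/tmp/tmp.4bX1hglka6 in this run):

  rpc.py sock_set_default_impl -i ssl
  rpc.py sock_impl_set_options -i ssl --tls-version 13
  # PSK file content exactly as generated by format_interchange_psk above
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
  chmod 0600 "$key_path"
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 "$key_path"
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The first data-path check is then the spdk_nvme_perf run recorded above, pointed at the key file directly (binary path shortened):

  ip netns exec nvmf_tgt_ns_spdk spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
      --psk-path "$key_path"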
00:11:44.780 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.780 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:11:44.780 [2024-11-26 20:34:57.302159] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:11:44.780 [2024-11-26 20:34:57.302212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70452 ] 00:11:44.780 [2024-11-26 20:34:57.437057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.780 [2024-11-26 20:34:57.474055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.780 [2024-11-26 20:34:57.506294] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:44.780 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.780 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:11:44.780 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4bX1hglka6 00:11:44.780 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:11:44.780 [2024-11-26 20:34:58.607495] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:44.780 TLSTESTn1 00:11:44.780 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:44.780 Running I/O for 10 seconds... 
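The bdevperf pass exercises the same listener from the host side over bdevperf's private RPC socket: register the PSK as keyring key0, attach a controller with --psk, then let bdevperf.py trigger the 10-second verify job against the resulting TLSTESTn1 bdev. In outline (binary and script paths shortened from the trace):

  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &   # -z: wait to be driven over RPC
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4bX1hglka6
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests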
00:11:46.292 6346.00 IOPS, 24.79 MiB/s [2024-11-26T20:35:01.789Z] 6645.50 IOPS, 25.96 MiB/s [2024-11-26T20:35:03.173Z] 6791.67 IOPS, 26.53 MiB/s [2024-11-26T20:35:04.106Z] 6840.75 IOPS, 26.72 MiB/s [2024-11-26T20:35:05.041Z] 6869.20 IOPS, 26.83 MiB/s [2024-11-26T20:35:05.977Z] 6915.83 IOPS, 27.01 MiB/s [2024-11-26T20:35:06.922Z] 6945.86 IOPS, 27.13 MiB/s [2024-11-26T20:35:07.862Z] 6965.50 IOPS, 27.21 MiB/s [2024-11-26T20:35:08.805Z] 6979.56 IOPS, 27.26 MiB/s [2024-11-26T20:35:08.805Z] 6997.20 IOPS, 27.33 MiB/s 00:11:54.250 Latency(us) 00:11:54.250 [2024-11-26T20:35:08.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:54.250 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:54.250 Verification LBA range: start 0x0 length 0x2000 00:11:54.250 TLSTESTn1 : 10.01 7003.06 27.36 0.00 0.00 18248.19 3478.45 19459.15 00:11:54.250 [2024-11-26T20:35:08.805Z] =================================================================================================================== 00:11:54.250 [2024-11-26T20:35:08.805Z] Total : 7003.06 27.36 0.00 0.00 18248.19 3478.45 19459.15 00:11:54.250 { 00:11:54.250 "results": [ 00:11:54.250 { 00:11:54.250 "job": "TLSTESTn1", 00:11:54.250 "core_mask": "0x4", 00:11:54.250 "workload": "verify", 00:11:54.250 "status": "finished", 00:11:54.250 "verify_range": { 00:11:54.250 "start": 0, 00:11:54.250 "length": 8192 00:11:54.250 }, 00:11:54.250 "queue_depth": 128, 00:11:54.250 "io_size": 4096, 00:11:54.250 "runtime": 10.009201, 00:11:54.250 "iops": 7003.056487725644, 00:11:54.250 "mibps": 27.355689405178296, 00:11:54.250 "io_failed": 0, 00:11:54.250 "io_timeout": 0, 00:11:54.250 "avg_latency_us": 18248.189170960293, 00:11:54.250 "min_latency_us": 3478.449230769231, 00:11:54.250 "max_latency_us": 19459.150769230768 00:11:54.250 } 00:11:54.250 ], 00:11:54.250 "core_count": 1 00:11:54.250 } 00:11:54.510 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:54.510 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 70452 00:11:54.510 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70452 ']' 00:11:54.510 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70452 00:11:54.510 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:11:54.510 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.510 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70452 00:11:54.510 killing process with pid 70452 00:11:54.510 Received shutdown signal, test time was about 10.000000 seconds 00:11:54.510 00:11:54.510 Latency(us) 00:11:54.510 [2024-11-26T20:35:09.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:54.510 [2024-11-26T20:35:09.065Z] =================================================================================================================== 00:11:54.510 [2024-11-26T20:35:09.065Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:54.510 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:54.510 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:54.510 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 70452' 00:11:54.510 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70452 00:11:54.510 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70452 00:11:54.510 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jdIyaxSojI 00:11:54.510 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:11:54.510 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jdIyaxSojI 00:11:54.510 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:11:54.510 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:54.510 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:11:54.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:54.510 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:54.510 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jdIyaxSojI 00:11:54.510 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:54.510 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:54.510 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:54.511 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.jdIyaxSojI 00:11:54.511 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:54.511 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70588 00:11:54.511 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:54.511 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70588 /var/tmp/bdevperf.sock 00:11:54.511 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:54.511 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70588 ']' 00:11:54.511 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:54.511 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:54.511 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:54.511 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:54.511 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:11:54.511 [2024-11-26 20:35:08.974082] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:11:54.511 [2024-11-26 20:35:08.974287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70588 ] 00:11:54.771 [2024-11-26 20:35:09.111059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.771 [2024-11-26 20:35:09.143284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.771 [2024-11-26 20:35:09.172393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:55.341 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:55.341 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:11:55.341 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jdIyaxSojI 00:11:55.602 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:11:55.863 [2024-11-26 20:35:10.212360] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:55.863 [2024-11-26 20:35:10.216649] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:55.863 [2024-11-26 20:35:10.217183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb2ff0 (107): Transport endpoint is not connected 00:11:55.863 [2024-11-26 20:35:10.218177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb2ff0 (9): Bad file descriptor 00:11:55.863 [2024-11-26 20:35:10.219175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:11:55.863 [2024-11-26 20:35:10.219237] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:11:55.863 [2024-11-26 20:35:10.219276] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:11:55.863 [2024-11-26 20:35:10.219303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:11:55.863 request: 00:11:55.863 { 00:11:55.863 "name": "TLSTEST", 00:11:55.863 "trtype": "tcp", 00:11:55.863 "traddr": "10.0.0.3", 00:11:55.863 "adrfam": "ipv4", 00:11:55.863 "trsvcid": "4420", 00:11:55.863 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:55.863 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:55.863 "prchk_reftag": false, 00:11:55.863 "prchk_guard": false, 00:11:55.863 "hdgst": false, 00:11:55.863 "ddgst": false, 00:11:55.863 "psk": "key0", 00:11:55.863 "allow_unrecognized_csi": false, 00:11:55.863 "method": "bdev_nvme_attach_controller", 00:11:55.863 "req_id": 1 00:11:55.863 } 00:11:55.863 Got JSON-RPC error response 00:11:55.863 response: 00:11:55.863 { 00:11:55.863 "code": -5, 00:11:55.863 "message": "Input/output error" 00:11:55.863 } 00:11:55.863 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 70588 00:11:55.863 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70588 ']' 00:11:55.863 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70588 00:11:55.863 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:11:55.863 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:55.863 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70588 00:11:55.863 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:55.863 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:55.863 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70588' 00:11:55.863 killing process with pid 70588 00:11:55.863 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70588 00:11:55.863 Received shutdown signal, test time was about 10.000000 seconds 00:11:55.863 00:11:55.863 Latency(us) 00:11:55.863 [2024-11-26T20:35:10.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:55.863 [2024-11-26T20:35:10.418Z] =================================================================================================================== 00:11:55.863 [2024-11-26T20:35:10.418Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:55.863 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70588 00:11:55.863 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:11:55.863 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:11:55.863 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:55.864 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:55.864 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:55.864 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.4bX1hglka6 00:11:55.864 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:11:55.864 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.4bX1hglka6 
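The attach failure just recorded is deliberate: the listener only knows the first key (key0 from /tmp/tmp.4bX1hglka6, bound to host1), while this run registers the second key file (/tmp/tmp.jdIyaxSojI) under the same key name, so the TLS handshake cannot complete, the connection is torn down ("Transport endpoint is not connected"), and bdev_nvme_attach_controller returns JSON-RPC error -5. The NOT wrapper inverts the exit status, so the case passes precisely because the attach fails. The failing step reduces to:

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jdIyaxSojI   # key the target was never given
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  # expected: "Input/output error" (-5); NOT(...) turns that failure into a pass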
00:11:55.864 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:11:55.864 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.864 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:11:55.864 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.864 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.4bX1hglka6 00:11:55.864 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:55.864 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:55.864 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:11:55.864 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.4bX1hglka6 00:11:55.864 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:55.864 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70616 00:11:55.864 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:55.864 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:55.864 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70616 /var/tmp/bdevperf.sock 00:11:55.864 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70616 ']' 00:11:55.864 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:55.864 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.864 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:55.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:55.864 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.864 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:11:55.864 [2024-11-26 20:35:10.389096] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:11:55.864 [2024-11-26 20:35:10.389275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70616 ] 00:11:56.125 [2024-11-26 20:35:10.524548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.125 [2024-11-26 20:35:10.556849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.125 [2024-11-26 20:35:10.587427] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:57.070 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.070 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:11:57.070 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4bX1hglka6 00:11:57.070 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:11:57.336 [2024-11-26 20:35:11.647023] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:57.336 [2024-11-26 20:35:11.656719] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:11:57.336 [2024-11-26 20:35:11.656751] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:11:57.336 [2024-11-26 20:35:11.656781] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:57.336 [2024-11-26 20:35:11.656801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b33ff0 (107): Transport endpoint is not connected 00:11:57.336 [2024-11-26 20:35:11.657792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b33ff0 (9): Bad file descriptor 00:11:57.336 [2024-11-26 20:35:11.658792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:11:57.336 [2024-11-26 20:35:11.658805] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:11:57.336 [2024-11-26 20:35:11.658811] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:11:57.336 [2024-11-26 20:35:11.658819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:11:57.336 request: 00:11:57.336 { 00:11:57.336 "name": "TLSTEST", 00:11:57.336 "trtype": "tcp", 00:11:57.336 "traddr": "10.0.0.3", 00:11:57.336 "adrfam": "ipv4", 00:11:57.336 "trsvcid": "4420", 00:11:57.336 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:57.336 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:11:57.336 "prchk_reftag": false, 00:11:57.336 "prchk_guard": false, 00:11:57.336 "hdgst": false, 00:11:57.336 "ddgst": false, 00:11:57.336 "psk": "key0", 00:11:57.336 "allow_unrecognized_csi": false, 00:11:57.336 "method": "bdev_nvme_attach_controller", 00:11:57.336 "req_id": 1 00:11:57.336 } 00:11:57.336 Got JSON-RPC error response 00:11:57.336 response: 00:11:57.336 { 00:11:57.336 "code": -5, 00:11:57.336 "message": "Input/output error" 00:11:57.336 } 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 70616 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70616 ']' 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70616 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70616 00:11:57.336 killing process with pid 70616 00:11:57.336 Received shutdown signal, test time was about 10.000000 seconds 00:11:57.336 00:11:57.336 Latency(us) 00:11:57.336 [2024-11-26T20:35:11.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:57.336 [2024-11-26T20:35:11.891Z] =================================================================================================================== 00:11:57.336 [2024-11-26T20:35:11.891Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70616' 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70616 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70616 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.4bX1hglka6 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.4bX1hglka6 
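The failure above is the identity variant of the same check: the key material is the valid first key, but the connection is made as host2, which was never added with nvmf_subsystem_add_host. The target builds the TLS PSK identity from the host and subsystem NQNs ("NVMe0R01 nqn...host2 nqn...cnode1" in the log), finds no key registered for it, and closes the connection, so the attach again ends in error -5; the cnode2 case that follows fails the same lookup from the subsystem side. Reduced to the essential step:

  # valid key, but no PSK is registered for host2 on the target
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4bX1hglka6
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0
  # target side: "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1"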
00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:11:57.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.4bX1hglka6 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.4bX1hglka6 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70639 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70639 /var/tmp/bdevperf.sock 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70639 ']' 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:57.336 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:11:57.336 [2024-11-26 20:35:11.846160] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:11:57.336 [2024-11-26 20:35:11.846331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70639 ] 00:11:57.599 [2024-11-26 20:35:11.984017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.599 [2024-11-26 20:35:12.020730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.599 [2024-11-26 20:35:12.051785] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:58.171 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:58.171 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:11:58.171 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4bX1hglka6 00:11:58.435 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:11:58.695 [2024-11-26 20:35:13.108112] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:58.695 [2024-11-26 20:35:13.116850] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:11:58.695 [2024-11-26 20:35:13.116976] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:11:58.695 [2024-11-26 20:35:13.117057] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:58.695 [2024-11-26 20:35:13.117461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x679ff0 (107): Transport endpoint is not connected 00:11:58.695 [2024-11-26 20:35:13.118453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x679ff0 (9): Bad file descriptor 00:11:58.695 [2024-11-26 20:35:13.119451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:11:58.695 [2024-11-26 20:35:13.119533] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:11:58.695 [2024-11-26 20:35:13.119580] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:11:58.695 [2024-11-26 20:35:13.119661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:11:58.695 request: 00:11:58.695 { 00:11:58.695 "name": "TLSTEST", 00:11:58.695 "trtype": "tcp", 00:11:58.695 "traddr": "10.0.0.3", 00:11:58.695 "adrfam": "ipv4", 00:11:58.695 "trsvcid": "4420", 00:11:58.695 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:11:58.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:58.695 "prchk_reftag": false, 00:11:58.695 "prchk_guard": false, 00:11:58.695 "hdgst": false, 00:11:58.695 "ddgst": false, 00:11:58.695 "psk": "key0", 00:11:58.695 "allow_unrecognized_csi": false, 00:11:58.695 "method": "bdev_nvme_attach_controller", 00:11:58.695 "req_id": 1 00:11:58.696 } 00:11:58.696 Got JSON-RPC error response 00:11:58.696 response: 00:11:58.696 { 00:11:58.696 "code": -5, 00:11:58.696 "message": "Input/output error" 00:11:58.696 } 00:11:58.696 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 70639 00:11:58.696 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70639 ']' 00:11:58.696 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70639 00:11:58.696 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:11:58.696 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.696 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70639 00:11:58.696 killing process with pid 70639 00:11:58.696 Received shutdown signal, test time was about 10.000000 seconds 00:11:58.696 00:11:58.696 Latency(us) 00:11:58.696 [2024-11-26T20:35:13.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:58.696 [2024-11-26T20:35:13.251Z] =================================================================================================================== 00:11:58.696 [2024-11-26T20:35:13.251Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:58.696 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:58.696 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:58.696 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70639' 00:11:58.696 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70639 00:11:58.696 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70639 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:58.956 20:35:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:11:58.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70668 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70668 /var/tmp/bdevperf.sock 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70668 ']' 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.956 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:11:58.956 [2024-11-26 20:35:13.315200] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:11:58.956 [2024-11-26 20:35:13.315474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70668 ] 00:11:58.956 [2024-11-26 20:35:13.452816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.956 [2024-11-26 20:35:13.505432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.218 [2024-11-26 20:35:13.541574] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:59.790 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.790 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:11:59.790 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:12:00.052 [2024-11-26 20:35:14.413587] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:12:00.052 [2024-11-26 20:35:14.413789] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:12:00.052 request: 00:12:00.052 { 00:12:00.052 "name": "key0", 00:12:00.052 "path": "", 00:12:00.052 "method": "keyring_file_add_key", 00:12:00.052 "req_id": 1 00:12:00.052 } 00:12:00.052 Got JSON-RPC error response 00:12:00.052 response: 00:12:00.052 { 00:12:00.052 "code": -1, 00:12:00.052 "message": "Operation not permitted" 00:12:00.052 } 00:12:00.052 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:00.314 [2024-11-26 20:35:14.625741] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:00.314 [2024-11-26 20:35:14.625896] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:12:00.314 request: 00:12:00.314 { 00:12:00.314 "name": "TLSTEST", 00:12:00.314 "trtype": "tcp", 00:12:00.314 "traddr": "10.0.0.3", 00:12:00.314 "adrfam": "ipv4", 00:12:00.314 "trsvcid": "4420", 00:12:00.314 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:00.314 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:00.314 "prchk_reftag": false, 00:12:00.314 "prchk_guard": false, 00:12:00.314 "hdgst": false, 00:12:00.314 "ddgst": false, 00:12:00.314 "psk": "key0", 00:12:00.314 "allow_unrecognized_csi": false, 00:12:00.314 "method": "bdev_nvme_attach_controller", 00:12:00.314 "req_id": 1 00:12:00.314 } 00:12:00.314 Got JSON-RPC error response 00:12:00.314 response: 00:12:00.314 { 00:12:00.314 "code": -126, 00:12:00.314 "message": "Required key not available" 00:12:00.314 } 00:12:00.314 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 70668 00:12:00.314 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70668 ']' 00:12:00.314 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70668 00:12:00.314 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:00.314 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.314 20:35:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70668 00:12:00.314 killing process with pid 70668 00:12:00.314 Received shutdown signal, test time was about 10.000000 seconds 00:12:00.314 00:12:00.314 Latency(us) 00:12:00.314 [2024-11-26T20:35:14.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:00.314 [2024-11-26T20:35:14.869Z] =================================================================================================================== 00:12:00.314 [2024-11-26T20:35:14.869Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:00.314 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:00.314 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:00.314 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70668' 00:12:00.314 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70668 00:12:00.314 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70668 00:12:00.314 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:12:00.314 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:00.314 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:00.314 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:00.314 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:00.314 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 70221 00:12:00.314 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70221 ']' 00:12:00.314 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70221 00:12:00.314 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:00.314 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.314 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70221 00:12:00.314 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:00.314 killing process with pid 70221 00:12:00.314 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:00.314 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70221' 00:12:00.314 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70221 00:12:00.314 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70221 00:12:00.576 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:12:00.576 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:12:00.576 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:12:00.576 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:12:00.576 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:12:00.576 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:12:00.576 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:12:00.576 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:00.576 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:12:00.576 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.JJQPdU3TEO 00:12:00.576 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:00.577 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.JJQPdU3TEO 00:12:00.577 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:12:00.577 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:00.577 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:00.577 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:00.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.577 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=70712 00:12:00.577 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:00.577 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 70712 00:12:00.577 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70712 ']' 00:12:00.577 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.577 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.577 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.577 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.577 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:00.577 [2024-11-26 20:35:15.000898] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:12:00.577 [2024-11-26 20:35:15.000951] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.844 [2024-11-26 20:35:15.138599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.844 [2024-11-26 20:35:15.170369] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.844 [2024-11-26 20:35:15.170516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
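format_interchange_psk above wraps the raw configured key into the NVMe TLS PSK interchange format: the literal 'NVMeTLSkey-1' prefix, a hash indicator ('02' selecting SHA-384 here), and a base64 blob of the key bytes followed by a CRC32, with ':' separators. A small standalone sketch of that transformation, mirroring the inline python call in the trace (the little-endian placement of the CRC32 is an assumption of this sketch, not something the log confirms):

    key=00112233445566778899aabbccddeeff0011223344556677
    python3 - "$key" <<'PY'
    import base64, sys, zlib
    key = sys.argv[1].encode()
    crc = zlib.crc32(key).to_bytes(4, "little")   # CRC byte order assumed little-endian
    print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
    PY

The resulting string is written to a mktemp file and restricted to mode 0600, which matters later: the keyring refuses key files that are readable by group or others.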
00:12:00.844 [2024-11-26 20:35:15.170564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.844 [2024-11-26 20:35:15.170586] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.844 [2024-11-26 20:35:15.170614] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:00.844 [2024-11-26 20:35:15.170836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.844 [2024-11-26 20:35:15.201383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:01.418 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.419 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:01.419 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:01.419 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:01.419 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:01.419 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.419 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.JJQPdU3TEO 00:12:01.419 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JJQPdU3TEO 00:12:01.419 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:01.680 [2024-11-26 20:35:16.113656] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.680 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:01.941 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:12:01.941 [2024-11-26 20:35:16.493712] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:01.941 [2024-11-26 20:35:16.493856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:02.203 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:02.203 malloc0 00:12:02.203 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:02.465 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JJQPdU3TEO 00:12:02.725 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:12:02.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
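setup_nvmf_tgt drives the entire target-side TLS configuration above through rpc.py. Gathered into one place, with the addresses, NQNs and key path taken from this run, the sequence is roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS-secured
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 /tmp/tmp.JJQPdU3TEO
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0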
00:12:02.986 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JJQPdU3TEO 00:12:02.986 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:02.986 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:02.986 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:02.986 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.JJQPdU3TEO 00:12:02.986 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:02.986 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70762 00:12:02.986 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:02.986 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:02.986 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70762 /var/tmp/bdevperf.sock 00:12:02.986 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70762 ']' 00:12:02.986 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:02.986 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:02.986 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:02.986 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:02.986 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:02.986 [2024-11-26 20:35:17.343406] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
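On the host side, run_bdevperf then registers the same key on bdevperf's own RPC socket, attaches a TLS-enabled controller, and drives I/O through bdevperf.py, as the trace below shows. Condensed, the host-side sequence for this run is:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    $rpc -s $sock keyring_file_add_key key0 /tmp/tmp.JJQPdU3TEO
    $rpc -s $sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    # run the configured verify workload for 10 seconds (bdevperf.py itself waits up to 20)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s $sock perform_tests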
00:12:02.986 [2024-11-26 20:35:17.343631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70762 ] 00:12:02.986 [2024-11-26 20:35:17.482130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.986 [2024-11-26 20:35:17.519295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.247 [2024-11-26 20:35:17.551889] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:03.818 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:03.818 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:03.818 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JJQPdU3TEO 00:12:04.076 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:04.337 [2024-11-26 20:35:18.657338] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:04.337 TLSTESTn1 00:12:04.337 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:04.337 Running I/O for 10 seconds... 00:12:06.664 6342.00 IOPS, 24.77 MiB/s [2024-11-26T20:35:22.162Z] 6431.50 IOPS, 25.12 MiB/s [2024-11-26T20:35:23.106Z] 6648.00 IOPS, 25.97 MiB/s [2024-11-26T20:35:24.048Z] 6764.25 IOPS, 26.42 MiB/s [2024-11-26T20:35:25.013Z] 6820.00 IOPS, 26.64 MiB/s [2024-11-26T20:35:25.956Z] 6874.33 IOPS, 26.85 MiB/s [2024-11-26T20:35:26.902Z] 6904.14 IOPS, 26.97 MiB/s [2024-11-26T20:35:27.848Z] 6934.12 IOPS, 27.09 MiB/s [2024-11-26T20:35:29.234Z] 6960.78 IOPS, 27.19 MiB/s [2024-11-26T20:35:29.234Z] 6978.50 IOPS, 27.26 MiB/s 00:12:14.679 Latency(us) 00:12:14.679 [2024-11-26T20:35:29.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.679 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:14.679 Verification LBA range: start 0x0 length 0x2000 00:12:14.679 TLSTESTn1 : 10.01 6984.64 27.28 0.00 0.00 18296.58 3276.80 17442.66 00:12:14.679 [2024-11-26T20:35:29.234Z] =================================================================================================================== 00:12:14.679 [2024-11-26T20:35:29.234Z] Total : 6984.64 27.28 0.00 0.00 18296.58 3276.80 17442.66 00:12:14.679 { 00:12:14.679 "results": [ 00:12:14.679 { 00:12:14.679 "job": "TLSTESTn1", 00:12:14.679 "core_mask": "0x4", 00:12:14.679 "workload": "verify", 00:12:14.679 "status": "finished", 00:12:14.679 "verify_range": { 00:12:14.679 "start": 0, 00:12:14.679 "length": 8192 00:12:14.679 }, 00:12:14.679 "queue_depth": 128, 00:12:14.680 "io_size": 4096, 00:12:14.680 "runtime": 10.009251, 00:12:14.680 "iops": 6984.638510913554, 00:12:14.680 "mibps": 27.28374418325607, 00:12:14.680 "io_failed": 0, 00:12:14.680 "io_timeout": 0, 00:12:14.680 "avg_latency_us": 18296.583398804854, 00:12:14.680 "min_latency_us": 3276.8, 00:12:14.680 "max_latency_us": 
17442.65846153846 00:12:14.680 } 00:12:14.680 ], 00:12:14.680 "core_count": 1 00:12:14.680 } 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 70762 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70762 ']' 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70762 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70762 00:12:14.680 killing process with pid 70762 00:12:14.680 Received shutdown signal, test time was about 10.000000 seconds 00:12:14.680 00:12:14.680 Latency(us) 00:12:14.680 [2024-11-26T20:35:29.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.680 [2024-11-26T20:35:29.235Z] =================================================================================================================== 00:12:14.680 [2024-11-26T20:35:29.235Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70762' 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70762 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70762 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.JJQPdU3TEO 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JJQPdU3TEO 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JJQPdU3TEO 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:12:14.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JJQPdU3TEO 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.JJQPdU3TEO 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70897 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70897 /var/tmp/bdevperf.sock 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70897 ']' 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:14.680 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:14.680 [2024-11-26 20:35:29.025815] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
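This block repeats the bdevperf run against the key file that was just chmod'ed to 0666. As the keyring error just below shows, keyring_file_add_key rejects a key file whose mode leaves it open to group or others, so the subsequent attach fails with 'Required key not available'. The permission requirement can be exercised in isolation along these lines (a sketch; the 0600 restore simply returns to the mode the earlier, successful registration used):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    chmod 0666 /tmp/tmp.JJQPdU3TEO
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JJQPdU3TEO \
        || echo "rejected: the keyring only accepts key files locked down to the owner"
    chmod 0600 /tmp/tmp.JJQPdU3TEO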
00:12:14.680 [2024-11-26 20:35:29.025881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70897 ] 00:12:14.680 [2024-11-26 20:35:29.161444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.680 [2024-11-26 20:35:29.197348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.680 [2024-11-26 20:35:29.228668] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:15.685 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:15.685 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:15.685 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JJQPdU3TEO 00:12:15.685 [2024-11-26 20:35:30.125189] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.JJQPdU3TEO': 0100666 00:12:15.685 [2024-11-26 20:35:30.125376] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:12:15.685 request: 00:12:15.685 { 00:12:15.685 "name": "key0", 00:12:15.685 "path": "/tmp/tmp.JJQPdU3TEO", 00:12:15.685 "method": "keyring_file_add_key", 00:12:15.685 "req_id": 1 00:12:15.685 } 00:12:15.685 Got JSON-RPC error response 00:12:15.685 response: 00:12:15.685 { 00:12:15.685 "code": -1, 00:12:15.685 "message": "Operation not permitted" 00:12:15.685 } 00:12:15.685 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:15.944 [2024-11-26 20:35:30.329326] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:15.944 [2024-11-26 20:35:30.329491] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:12:15.944 request: 00:12:15.944 { 00:12:15.944 "name": "TLSTEST", 00:12:15.944 "trtype": "tcp", 00:12:15.944 "traddr": "10.0.0.3", 00:12:15.944 "adrfam": "ipv4", 00:12:15.944 "trsvcid": "4420", 00:12:15.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:15.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:15.944 "prchk_reftag": false, 00:12:15.944 "prchk_guard": false, 00:12:15.944 "hdgst": false, 00:12:15.944 "ddgst": false, 00:12:15.944 "psk": "key0", 00:12:15.944 "allow_unrecognized_csi": false, 00:12:15.944 "method": "bdev_nvme_attach_controller", 00:12:15.944 "req_id": 1 00:12:15.944 } 00:12:15.944 Got JSON-RPC error response 00:12:15.944 response: 00:12:15.944 { 00:12:15.944 "code": -126, 00:12:15.944 "message": "Required key not available" 00:12:15.944 } 00:12:15.944 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 70897 00:12:15.944 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70897 ']' 00:12:15.944 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70897 00:12:15.944 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:15.944 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.944 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70897 00:12:15.944 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:15.944 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:15.944 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70897' 00:12:15.944 killing process with pid 70897 00:12:15.944 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70897 00:12:15.944 Received shutdown signal, test time was about 10.000000 seconds 00:12:15.944 00:12:15.944 Latency(us) 00:12:15.944 [2024-11-26T20:35:30.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:15.944 [2024-11-26T20:35:30.499Z] =================================================================================================================== 00:12:15.944 [2024-11-26T20:35:30.499Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:15.944 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70897 00:12:15.944 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:12:15.944 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:15.944 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:15.944 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:15.944 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:15.944 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 70712 00:12:15.944 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70712 ']' 00:12:15.944 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70712 00:12:15.944 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:15.944 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.944 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70712 00:12:16.203 killing process with pid 70712 00:12:16.203 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:16.203 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:16.203 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70712' 00:12:16.203 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70712 00:12:16.203 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70712 00:12:16.203 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:12:16.203 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:16.203 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:16.203 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:12:16.203 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=70931 00:12:16.203 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 70931 00:12:16.203 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:16.203 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70931 ']' 00:12:16.203 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.203 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.203 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.203 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.203 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:16.203 [2024-11-26 20:35:30.666867] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:12:16.203 [2024-11-26 20:35:30.667074] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.462 [2024-11-26 20:35:30.802276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.462 [2024-11-26 20:35:30.837401] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.462 [2024-11-26 20:35:30.837569] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.462 [2024-11-26 20:35:30.837653] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.462 [2024-11-26 20:35:30.837661] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.462 [2024-11-26 20:35:30.837665] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
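With a fresh target (pid 70931) up, the next step exercises the same permission failure on the target side: the NOT helper from autotest_common.sh runs setup_nvmf_tgt against the still-0666 key and treats the command's failure as the expected outcome. Its effect is roughly the inverse of a plain call, along these lines (a sketch of the pattern, not the helper's exact implementation):

    # succeed only if the wrapped command fails, mirroring the es=1 handling in the trace below
    if setup_nvmf_tgt /tmp/tmp.JJQPdU3TEO; then
        echo "ERROR: a group/other-readable key was accepted" >&2
        exit 1
    fi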
00:12:16.462 [2024-11-26 20:35:30.837923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.462 [2024-11-26 20:35:30.869604] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:17.028 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.028 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:17.028 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:17.028 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:17.028 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:17.028 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.028 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.JJQPdU3TEO 00:12:17.028 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:17.028 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.JJQPdU3TEO 00:12:17.028 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:12:17.028 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:17.028 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:12:17.028 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:17.028 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.JJQPdU3TEO 00:12:17.028 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JJQPdU3TEO 00:12:17.028 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:17.286 [2024-11-26 20:35:31.759470] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.286 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:17.544 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:12:17.801 [2024-11-26 20:35:32.167538] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:17.801 [2024-11-26 20:35:32.167722] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:17.801 20:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:18.058 malloc0 00:12:18.058 20:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:18.058 20:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JJQPdU3TEO 00:12:18.316 
[2024-11-26 20:35:32.790375] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.JJQPdU3TEO': 0100666 00:12:18.316 [2024-11-26 20:35:32.790412] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:12:18.316 request: 00:12:18.316 { 00:12:18.316 "name": "key0", 00:12:18.316 "path": "/tmp/tmp.JJQPdU3TEO", 00:12:18.316 "method": "keyring_file_add_key", 00:12:18.316 "req_id": 1 00:12:18.316 } 00:12:18.316 Got JSON-RPC error response 00:12:18.316 response: 00:12:18.316 { 00:12:18.316 "code": -1, 00:12:18.316 "message": "Operation not permitted" 00:12:18.316 } 00:12:18.316 20:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:12:18.575 [2024-11-26 20:35:32.998440] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:12:18.575 [2024-11-26 20:35:32.998496] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:12:18.575 request: 00:12:18.575 { 00:12:18.575 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:18.575 "host": "nqn.2016-06.io.spdk:host1", 00:12:18.575 "psk": "key0", 00:12:18.575 "method": "nvmf_subsystem_add_host", 00:12:18.575 "req_id": 1 00:12:18.575 } 00:12:18.575 Got JSON-RPC error response 00:12:18.575 response: 00:12:18.575 { 00:12:18.575 "code": -32603, 00:12:18.575 "message": "Internal error" 00:12:18.575 } 00:12:18.575 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:18.575 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:18.575 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:18.575 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:18.575 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 70931 00:12:18.575 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70931 ']' 00:12:18.575 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70931 00:12:18.575 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:18.575 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.575 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70931 00:12:18.575 killing process with pid 70931 00:12:18.575 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:18.575 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:18.575 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70931' 00:12:18.575 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70931 00:12:18.575 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70931 00:12:18.836 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.JJQPdU3TEO 00:12:18.836 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:12:18.836 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:18.836 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:18.836 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:18.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.836 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=70994 00:12:18.836 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 70994 00:12:18.836 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70994 ']' 00:12:18.836 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.836 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.836 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:18.836 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.836 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.836 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:18.836 [2024-11-26 20:35:33.191920] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:12:18.836 [2024-11-26 20:35:33.192098] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.836 [2024-11-26 20:35:33.324563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.836 [2024-11-26 20:35:33.356139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.836 [2024-11-26 20:35:33.356181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.836 [2024-11-26 20:35:33.356187] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.836 [2024-11-26 20:35:33.356191] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.836 [2024-11-26 20:35:33.356195] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
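The failure above also shows the ordering dependency: nvmf_subsystem_add_host --psk can only reference a key that was successfully registered, so once keyring_file_add_key has rejected the 0666 file, add_host returns -32603 with "Key 'key0' does not exist". In a script, guarding on the registration status avoids hitting the second, less specific error (a sketch, using the paths and names from this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    if $rpc keyring_file_add_key key0 /tmp/tmp.JJQPdU3TEO; then
        $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    else
        echo "key0 was not registered; check that the file exists and is mode 0600" >&2
    fi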
00:12:18.836 [2024-11-26 20:35:33.356417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.836 [2024-11-26 20:35:33.385761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:19.768 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.768 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:19.769 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:19.769 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:19.769 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:19.769 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.769 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.JJQPdU3TEO 00:12:19.769 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JJQPdU3TEO 00:12:19.769 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:20.027 [2024-11-26 20:35:34.358762] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:20.027 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:20.284 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:12:20.284 [2024-11-26 20:35:34.766811] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:20.284 [2024-11-26 20:35:34.767088] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:20.284 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:20.543 malloc0 00:12:20.543 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:20.802 20:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JJQPdU3TEO 00:12:21.064 20:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:12:21.064 20:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:21.064 20:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=71050 00:12:21.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:12:21.064 20:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:21.064 20:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 71050 /var/tmp/bdevperf.sock 00:12:21.064 20:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71050 ']' 00:12:21.064 20:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:21.064 20:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.064 20:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:21.064 20:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.064 20:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:21.322 [2024-11-26 20:35:35.642793] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:12:21.322 [2024-11-26 20:35:35.642851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71050 ] 00:12:21.322 [2024-11-26 20:35:35.783332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.322 [2024-11-26 20:35:35.819623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.322 [2024-11-26 20:35:35.850367] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:22.260 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.260 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:22.260 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JJQPdU3TEO 00:12:22.260 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:22.517 [2024-11-26 20:35:36.907886] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:22.517 TLSTESTn1 00:12:22.517 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:22.775 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:12:22.775 "subsystems": [ 00:12:22.775 { 00:12:22.775 "subsystem": "keyring", 00:12:22.775 "config": [ 00:12:22.775 { 00:12:22.775 "method": "keyring_file_add_key", 00:12:22.775 "params": { 00:12:22.775 "name": "key0", 00:12:22.775 "path": "/tmp/tmp.JJQPdU3TEO" 00:12:22.775 } 00:12:22.775 } 00:12:22.775 ] 00:12:22.775 }, 00:12:22.775 { 00:12:22.775 "subsystem": "iobuf", 00:12:22.775 "config": [ 00:12:22.775 { 00:12:22.775 "method": "iobuf_set_options", 00:12:22.775 "params": { 00:12:22.775 "small_pool_count": 8192, 00:12:22.775 "large_pool_count": 1024, 00:12:22.775 "small_bufsize": 8192, 00:12:22.775 "large_bufsize": 135168, 
00:12:22.775 "enable_numa": false 00:12:22.775 } 00:12:22.775 } 00:12:22.775 ] 00:12:22.775 }, 00:12:22.775 { 00:12:22.775 "subsystem": "sock", 00:12:22.775 "config": [ 00:12:22.775 { 00:12:22.775 "method": "sock_set_default_impl", 00:12:22.775 "params": { 00:12:22.775 "impl_name": "uring" 00:12:22.775 } 00:12:22.775 }, 00:12:22.775 { 00:12:22.775 "method": "sock_impl_set_options", 00:12:22.775 "params": { 00:12:22.775 "impl_name": "ssl", 00:12:22.775 "recv_buf_size": 4096, 00:12:22.775 "send_buf_size": 4096, 00:12:22.775 "enable_recv_pipe": true, 00:12:22.775 "enable_quickack": false, 00:12:22.775 "enable_placement_id": 0, 00:12:22.775 "enable_zerocopy_send_server": true, 00:12:22.775 "enable_zerocopy_send_client": false, 00:12:22.775 "zerocopy_threshold": 0, 00:12:22.775 "tls_version": 0, 00:12:22.775 "enable_ktls": false 00:12:22.775 } 00:12:22.775 }, 00:12:22.775 { 00:12:22.775 "method": "sock_impl_set_options", 00:12:22.775 "params": { 00:12:22.775 "impl_name": "posix", 00:12:22.775 "recv_buf_size": 2097152, 00:12:22.775 "send_buf_size": 2097152, 00:12:22.775 "enable_recv_pipe": true, 00:12:22.775 "enable_quickack": false, 00:12:22.775 "enable_placement_id": 0, 00:12:22.775 "enable_zerocopy_send_server": true, 00:12:22.775 "enable_zerocopy_send_client": false, 00:12:22.775 "zerocopy_threshold": 0, 00:12:22.775 "tls_version": 0, 00:12:22.775 "enable_ktls": false 00:12:22.775 } 00:12:22.775 }, 00:12:22.775 { 00:12:22.775 "method": "sock_impl_set_options", 00:12:22.775 "params": { 00:12:22.775 "impl_name": "uring", 00:12:22.775 "recv_buf_size": 2097152, 00:12:22.775 "send_buf_size": 2097152, 00:12:22.775 "enable_recv_pipe": true, 00:12:22.775 "enable_quickack": false, 00:12:22.775 "enable_placement_id": 0, 00:12:22.775 "enable_zerocopy_send_server": false, 00:12:22.775 "enable_zerocopy_send_client": false, 00:12:22.775 "zerocopy_threshold": 0, 00:12:22.775 "tls_version": 0, 00:12:22.775 "enable_ktls": false 00:12:22.775 } 00:12:22.775 } 00:12:22.775 ] 00:12:22.775 }, 00:12:22.775 { 00:12:22.775 "subsystem": "vmd", 00:12:22.776 "config": [] 00:12:22.776 }, 00:12:22.776 { 00:12:22.776 "subsystem": "accel", 00:12:22.776 "config": [ 00:12:22.776 { 00:12:22.776 "method": "accel_set_options", 00:12:22.776 "params": { 00:12:22.776 "small_cache_size": 128, 00:12:22.776 "large_cache_size": 16, 00:12:22.776 "task_count": 2048, 00:12:22.776 "sequence_count": 2048, 00:12:22.776 "buf_count": 2048 00:12:22.776 } 00:12:22.776 } 00:12:22.776 ] 00:12:22.776 }, 00:12:22.776 { 00:12:22.776 "subsystem": "bdev", 00:12:22.776 "config": [ 00:12:22.776 { 00:12:22.776 "method": "bdev_set_options", 00:12:22.776 "params": { 00:12:22.776 "bdev_io_pool_size": 65535, 00:12:22.776 "bdev_io_cache_size": 256, 00:12:22.776 "bdev_auto_examine": true, 00:12:22.776 "iobuf_small_cache_size": 128, 00:12:22.776 "iobuf_large_cache_size": 16 00:12:22.776 } 00:12:22.776 }, 00:12:22.776 { 00:12:22.776 "method": "bdev_raid_set_options", 00:12:22.776 "params": { 00:12:22.776 "process_window_size_kb": 1024, 00:12:22.776 "process_max_bandwidth_mb_sec": 0 00:12:22.776 } 00:12:22.776 }, 00:12:22.776 { 00:12:22.776 "method": "bdev_iscsi_set_options", 00:12:22.776 "params": { 00:12:22.776 "timeout_sec": 30 00:12:22.776 } 00:12:22.776 }, 00:12:22.776 { 00:12:22.776 "method": "bdev_nvme_set_options", 00:12:22.776 "params": { 00:12:22.776 "action_on_timeout": "none", 00:12:22.776 "timeout_us": 0, 00:12:22.776 "timeout_admin_us": 0, 00:12:22.776 "keep_alive_timeout_ms": 10000, 00:12:22.776 "arbitration_burst": 0, 00:12:22.776 
"low_priority_weight": 0, 00:12:22.776 "medium_priority_weight": 0, 00:12:22.776 "high_priority_weight": 0, 00:12:22.776 "nvme_adminq_poll_period_us": 10000, 00:12:22.776 "nvme_ioq_poll_period_us": 0, 00:12:22.776 "io_queue_requests": 0, 00:12:22.776 "delay_cmd_submit": true, 00:12:22.776 "transport_retry_count": 4, 00:12:22.776 "bdev_retry_count": 3, 00:12:22.776 "transport_ack_timeout": 0, 00:12:22.776 "ctrlr_loss_timeout_sec": 0, 00:12:22.776 "reconnect_delay_sec": 0, 00:12:22.776 "fast_io_fail_timeout_sec": 0, 00:12:22.776 "disable_auto_failback": false, 00:12:22.776 "generate_uuids": false, 00:12:22.776 "transport_tos": 0, 00:12:22.776 "nvme_error_stat": false, 00:12:22.776 "rdma_srq_size": 0, 00:12:22.776 "io_path_stat": false, 00:12:22.776 "allow_accel_sequence": false, 00:12:22.776 "rdma_max_cq_size": 0, 00:12:22.776 "rdma_cm_event_timeout_ms": 0, 00:12:22.776 "dhchap_digests": [ 00:12:22.776 "sha256", 00:12:22.776 "sha384", 00:12:22.776 "sha512" 00:12:22.776 ], 00:12:22.776 "dhchap_dhgroups": [ 00:12:22.776 "null", 00:12:22.776 "ffdhe2048", 00:12:22.776 "ffdhe3072", 00:12:22.776 "ffdhe4096", 00:12:22.776 "ffdhe6144", 00:12:22.776 "ffdhe8192" 00:12:22.776 ] 00:12:22.776 } 00:12:22.776 }, 00:12:22.776 { 00:12:22.776 "method": "bdev_nvme_set_hotplug", 00:12:22.776 "params": { 00:12:22.776 "period_us": 100000, 00:12:22.776 "enable": false 00:12:22.776 } 00:12:22.776 }, 00:12:22.776 { 00:12:22.776 "method": "bdev_malloc_create", 00:12:22.776 "params": { 00:12:22.776 "name": "malloc0", 00:12:22.776 "num_blocks": 8192, 00:12:22.776 "block_size": 4096, 00:12:22.776 "physical_block_size": 4096, 00:12:22.776 "uuid": "327b7d9f-fea1-49b0-9289-b2e4074de179", 00:12:22.776 "optimal_io_boundary": 0, 00:12:22.776 "md_size": 0, 00:12:22.776 "dif_type": 0, 00:12:22.776 "dif_is_head_of_md": false, 00:12:22.776 "dif_pi_format": 0 00:12:22.776 } 00:12:22.776 }, 00:12:22.776 { 00:12:22.776 "method": "bdev_wait_for_examine" 00:12:22.776 } 00:12:22.776 ] 00:12:22.776 }, 00:12:22.776 { 00:12:22.776 "subsystem": "nbd", 00:12:22.776 "config": [] 00:12:22.776 }, 00:12:22.776 { 00:12:22.776 "subsystem": "scheduler", 00:12:22.776 "config": [ 00:12:22.776 { 00:12:22.776 "method": "framework_set_scheduler", 00:12:22.776 "params": { 00:12:22.776 "name": "static" 00:12:22.776 } 00:12:22.776 } 00:12:22.776 ] 00:12:22.776 }, 00:12:22.776 { 00:12:22.776 "subsystem": "nvmf", 00:12:22.776 "config": [ 00:12:22.776 { 00:12:22.776 "method": "nvmf_set_config", 00:12:22.776 "params": { 00:12:22.776 "discovery_filter": "match_any", 00:12:22.776 "admin_cmd_passthru": { 00:12:22.776 "identify_ctrlr": false 00:12:22.776 }, 00:12:22.776 "dhchap_digests": [ 00:12:22.776 "sha256", 00:12:22.776 "sha384", 00:12:22.776 "sha512" 00:12:22.776 ], 00:12:22.776 "dhchap_dhgroups": [ 00:12:22.776 "null", 00:12:22.776 "ffdhe2048", 00:12:22.776 "ffdhe3072", 00:12:22.776 "ffdhe4096", 00:12:22.776 "ffdhe6144", 00:12:22.776 "ffdhe8192" 00:12:22.776 ] 00:12:22.776 } 00:12:22.776 }, 00:12:22.776 { 00:12:22.776 "method": "nvmf_set_max_subsystems", 00:12:22.776 "params": { 00:12:22.776 "max_subsystems": 1024 00:12:22.776 } 00:12:22.776 }, 00:12:22.776 { 00:12:22.776 "method": "nvmf_set_crdt", 00:12:22.776 "params": { 00:12:22.776 "crdt1": 0, 00:12:22.776 "crdt2": 0, 00:12:22.776 "crdt3": 0 00:12:22.776 } 00:12:22.776 }, 00:12:22.776 { 00:12:22.776 "method": "nvmf_create_transport", 00:12:22.776 "params": { 00:12:22.776 "trtype": "TCP", 00:12:22.776 "max_queue_depth": 128, 00:12:22.776 "max_io_qpairs_per_ctrlr": 127, 00:12:22.776 
"in_capsule_data_size": 4096, 00:12:22.776 "max_io_size": 131072, 00:12:22.776 "io_unit_size": 131072, 00:12:22.776 "max_aq_depth": 128, 00:12:22.776 "num_shared_buffers": 511, 00:12:22.776 "buf_cache_size": 4294967295, 00:12:22.776 "dif_insert_or_strip": false, 00:12:22.776 "zcopy": false, 00:12:22.776 "c2h_success": false, 00:12:22.776 "sock_priority": 0, 00:12:22.776 "abort_timeout_sec": 1, 00:12:22.776 "ack_timeout": 0, 00:12:22.776 "data_wr_pool_size": 0 00:12:22.776 } 00:12:22.776 }, 00:12:22.776 { 00:12:22.776 "method": "nvmf_create_subsystem", 00:12:22.776 "params": { 00:12:22.776 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:22.776 "allow_any_host": false, 00:12:22.776 "serial_number": "SPDK00000000000001", 00:12:22.776 "model_number": "SPDK bdev Controller", 00:12:22.776 "max_namespaces": 10, 00:12:22.776 "min_cntlid": 1, 00:12:22.776 "max_cntlid": 65519, 00:12:22.776 "ana_reporting": false 00:12:22.776 } 00:12:22.776 }, 00:12:22.776 { 00:12:22.776 "method": "nvmf_subsystem_add_host", 00:12:22.776 "params": { 00:12:22.776 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:22.776 "host": "nqn.2016-06.io.spdk:host1", 00:12:22.776 "psk": "key0" 00:12:22.776 } 00:12:22.776 }, 00:12:22.776 { 00:12:22.776 "method": "nvmf_subsystem_add_ns", 00:12:22.776 "params": { 00:12:22.776 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:22.776 "namespace": { 00:12:22.776 "nsid": 1, 00:12:22.776 "bdev_name": "malloc0", 00:12:22.776 "nguid": "327B7D9FFEA149B09289B2E4074DE179", 00:12:22.776 "uuid": "327b7d9f-fea1-49b0-9289-b2e4074de179", 00:12:22.776 "no_auto_visible": false 00:12:22.776 } 00:12:22.776 } 00:12:22.776 }, 00:12:22.776 { 00:12:22.776 "method": "nvmf_subsystem_add_listener", 00:12:22.776 "params": { 00:12:22.776 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:22.776 "listen_address": { 00:12:22.776 "trtype": "TCP", 00:12:22.776 "adrfam": "IPv4", 00:12:22.776 "traddr": "10.0.0.3", 00:12:22.776 "trsvcid": "4420" 00:12:22.776 }, 00:12:22.776 "secure_channel": true 00:12:22.776 } 00:12:22.776 } 00:12:22.776 ] 00:12:22.776 } 00:12:22.776 ] 00:12:22.776 }' 00:12:22.776 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:12:23.039 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:12:23.039 "subsystems": [ 00:12:23.039 { 00:12:23.039 "subsystem": "keyring", 00:12:23.039 "config": [ 00:12:23.039 { 00:12:23.039 "method": "keyring_file_add_key", 00:12:23.039 "params": { 00:12:23.039 "name": "key0", 00:12:23.039 "path": "/tmp/tmp.JJQPdU3TEO" 00:12:23.039 } 00:12:23.039 } 00:12:23.039 ] 00:12:23.039 }, 00:12:23.039 { 00:12:23.039 "subsystem": "iobuf", 00:12:23.039 "config": [ 00:12:23.039 { 00:12:23.039 "method": "iobuf_set_options", 00:12:23.039 "params": { 00:12:23.039 "small_pool_count": 8192, 00:12:23.039 "large_pool_count": 1024, 00:12:23.039 "small_bufsize": 8192, 00:12:23.039 "large_bufsize": 135168, 00:12:23.039 "enable_numa": false 00:12:23.039 } 00:12:23.039 } 00:12:23.039 ] 00:12:23.039 }, 00:12:23.039 { 00:12:23.039 "subsystem": "sock", 00:12:23.039 "config": [ 00:12:23.039 { 00:12:23.039 "method": "sock_set_default_impl", 00:12:23.039 "params": { 00:12:23.039 "impl_name": "uring" 00:12:23.039 } 00:12:23.039 }, 00:12:23.039 { 00:12:23.039 "method": "sock_impl_set_options", 00:12:23.040 "params": { 00:12:23.040 "impl_name": "ssl", 00:12:23.040 "recv_buf_size": 4096, 00:12:23.040 "send_buf_size": 4096, 00:12:23.040 "enable_recv_pipe": true, 00:12:23.040 
"enable_quickack": false, 00:12:23.040 "enable_placement_id": 0, 00:12:23.040 "enable_zerocopy_send_server": true, 00:12:23.040 "enable_zerocopy_send_client": false, 00:12:23.040 "zerocopy_threshold": 0, 00:12:23.040 "tls_version": 0, 00:12:23.040 "enable_ktls": false 00:12:23.040 } 00:12:23.040 }, 00:12:23.040 { 00:12:23.040 "method": "sock_impl_set_options", 00:12:23.040 "params": { 00:12:23.040 "impl_name": "posix", 00:12:23.040 "recv_buf_size": 2097152, 00:12:23.040 "send_buf_size": 2097152, 00:12:23.040 "enable_recv_pipe": true, 00:12:23.040 "enable_quickack": false, 00:12:23.040 "enable_placement_id": 0, 00:12:23.040 "enable_zerocopy_send_server": true, 00:12:23.040 "enable_zerocopy_send_client": false, 00:12:23.040 "zerocopy_threshold": 0, 00:12:23.040 "tls_version": 0, 00:12:23.040 "enable_ktls": false 00:12:23.040 } 00:12:23.040 }, 00:12:23.040 { 00:12:23.040 "method": "sock_impl_set_options", 00:12:23.040 "params": { 00:12:23.040 "impl_name": "uring", 00:12:23.040 "recv_buf_size": 2097152, 00:12:23.040 "send_buf_size": 2097152, 00:12:23.040 "enable_recv_pipe": true, 00:12:23.040 "enable_quickack": false, 00:12:23.040 "enable_placement_id": 0, 00:12:23.040 "enable_zerocopy_send_server": false, 00:12:23.040 "enable_zerocopy_send_client": false, 00:12:23.040 "zerocopy_threshold": 0, 00:12:23.040 "tls_version": 0, 00:12:23.040 "enable_ktls": false 00:12:23.040 } 00:12:23.040 } 00:12:23.040 ] 00:12:23.040 }, 00:12:23.040 { 00:12:23.040 "subsystem": "vmd", 00:12:23.040 "config": [] 00:12:23.040 }, 00:12:23.040 { 00:12:23.040 "subsystem": "accel", 00:12:23.040 "config": [ 00:12:23.040 { 00:12:23.040 "method": "accel_set_options", 00:12:23.040 "params": { 00:12:23.040 "small_cache_size": 128, 00:12:23.040 "large_cache_size": 16, 00:12:23.040 "task_count": 2048, 00:12:23.040 "sequence_count": 2048, 00:12:23.040 "buf_count": 2048 00:12:23.040 } 00:12:23.040 } 00:12:23.040 ] 00:12:23.040 }, 00:12:23.040 { 00:12:23.040 "subsystem": "bdev", 00:12:23.040 "config": [ 00:12:23.040 { 00:12:23.040 "method": "bdev_set_options", 00:12:23.040 "params": { 00:12:23.040 "bdev_io_pool_size": 65535, 00:12:23.040 "bdev_io_cache_size": 256, 00:12:23.040 "bdev_auto_examine": true, 00:12:23.040 "iobuf_small_cache_size": 128, 00:12:23.040 "iobuf_large_cache_size": 16 00:12:23.040 } 00:12:23.040 }, 00:12:23.040 { 00:12:23.041 "method": "bdev_raid_set_options", 00:12:23.041 "params": { 00:12:23.041 "process_window_size_kb": 1024, 00:12:23.041 "process_max_bandwidth_mb_sec": 0 00:12:23.041 } 00:12:23.041 }, 00:12:23.041 { 00:12:23.041 "method": "bdev_iscsi_set_options", 00:12:23.041 "params": { 00:12:23.041 "timeout_sec": 30 00:12:23.041 } 00:12:23.041 }, 00:12:23.041 { 00:12:23.041 "method": "bdev_nvme_set_options", 00:12:23.041 "params": { 00:12:23.041 "action_on_timeout": "none", 00:12:23.041 "timeout_us": 0, 00:12:23.041 "timeout_admin_us": 0, 00:12:23.041 "keep_alive_timeout_ms": 10000, 00:12:23.041 "arbitration_burst": 0, 00:12:23.041 "low_priority_weight": 0, 00:12:23.041 "medium_priority_weight": 0, 00:12:23.041 "high_priority_weight": 0, 00:12:23.041 "nvme_adminq_poll_period_us": 10000, 00:12:23.041 "nvme_ioq_poll_period_us": 0, 00:12:23.041 "io_queue_requests": 512, 00:12:23.041 "delay_cmd_submit": true, 00:12:23.041 "transport_retry_count": 4, 00:12:23.041 "bdev_retry_count": 3, 00:12:23.041 "transport_ack_timeout": 0, 00:12:23.041 "ctrlr_loss_timeout_sec": 0, 00:12:23.041 "reconnect_delay_sec": 0, 00:12:23.041 "fast_io_fail_timeout_sec": 0, 00:12:23.041 "disable_auto_failback": false, 00:12:23.041 
"generate_uuids": false, 00:12:23.041 "transport_tos": 0, 00:12:23.041 "nvme_error_stat": false, 00:12:23.041 "rdma_srq_size": 0, 00:12:23.041 "io_path_stat": false, 00:12:23.041 "allow_accel_sequence": false, 00:12:23.041 "rdma_max_cq_size": 0, 00:12:23.041 "rdma_cm_event_timeout_ms": 0, 00:12:23.041 "dhchap_digests": [ 00:12:23.041 "sha256", 00:12:23.041 "sha384", 00:12:23.041 "sha512" 00:12:23.041 ], 00:12:23.041 "dhchap_dhgroups": [ 00:12:23.041 "null", 00:12:23.041 "ffdhe2048", 00:12:23.041 "ffdhe3072", 00:12:23.041 "ffdhe4096", 00:12:23.041 "ffdhe6144", 00:12:23.041 "ffdhe8192" 00:12:23.041 ] 00:12:23.041 } 00:12:23.041 }, 00:12:23.041 { 00:12:23.041 "method": "bdev_nvme_attach_controller", 00:12:23.041 "params": { 00:12:23.041 "name": "TLSTEST", 00:12:23.041 "trtype": "TCP", 00:12:23.041 "adrfam": "IPv4", 00:12:23.041 "traddr": "10.0.0.3", 00:12:23.041 "trsvcid": "4420", 00:12:23.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:23.041 "prchk_reftag": false, 00:12:23.041 "prchk_guard": false, 00:12:23.041 "ctrlr_loss_timeout_sec": 0, 00:12:23.041 "reconnect_delay_sec": 0, 00:12:23.041 "fast_io_fail_timeout_sec": 0, 00:12:23.041 "psk": "key0", 00:12:23.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:23.041 "hdgst": false, 00:12:23.041 "ddgst": false, 00:12:23.041 "multipath": "multipath" 00:12:23.041 } 00:12:23.041 }, 00:12:23.041 { 00:12:23.041 "method": "bdev_nvme_set_hotplug", 00:12:23.041 "params": { 00:12:23.041 "period_us": 100000, 00:12:23.041 "enable": false 00:12:23.041 } 00:12:23.041 }, 00:12:23.041 { 00:12:23.041 "method": "bdev_wait_for_examine" 00:12:23.041 } 00:12:23.041 ] 00:12:23.041 }, 00:12:23.041 { 00:12:23.041 "subsystem": "nbd", 00:12:23.041 "config": [] 00:12:23.041 } 00:12:23.041 ] 00:12:23.041 }' 00:12:23.041 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 71050 00:12:23.041 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71050 ']' 00:12:23.041 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71050 00:12:23.041 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:23.041 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:23.041 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71050 00:12:23.041 killing process with pid 71050 00:12:23.041 Received shutdown signal, test time was about 10.000000 seconds 00:12:23.041 00:12:23.041 Latency(us) 00:12:23.041 [2024-11-26T20:35:37.596Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:23.041 [2024-11-26T20:35:37.596Z] =================================================================================================================== 00:12:23.041 [2024-11-26T20:35:37.596Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:23.041 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:23.041 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:23.041 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71050' 00:12:23.041 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71050 00:12:23.041 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71050 
00:12:23.302 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 70994 00:12:23.302 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70994 ']' 00:12:23.302 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70994 00:12:23.302 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:23.302 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:23.302 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70994 00:12:23.302 killing process with pid 70994 00:12:23.302 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:23.302 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:23.302 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70994' 00:12:23.302 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70994 00:12:23.302 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70994 00:12:23.302 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:12:23.302 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:23.302 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:23.302 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:23.302 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:12:23.302 "subsystems": [ 00:12:23.302 { 00:12:23.302 "subsystem": "keyring", 00:12:23.302 "config": [ 00:12:23.302 { 00:12:23.302 "method": "keyring_file_add_key", 00:12:23.302 "params": { 00:12:23.302 "name": "key0", 00:12:23.302 "path": "/tmp/tmp.JJQPdU3TEO" 00:12:23.302 } 00:12:23.302 } 00:12:23.302 ] 00:12:23.302 }, 00:12:23.302 { 00:12:23.302 "subsystem": "iobuf", 00:12:23.302 "config": [ 00:12:23.302 { 00:12:23.302 "method": "iobuf_set_options", 00:12:23.302 "params": { 00:12:23.302 "small_pool_count": 8192, 00:12:23.302 "large_pool_count": 1024, 00:12:23.302 "small_bufsize": 8192, 00:12:23.302 "large_bufsize": 135168, 00:12:23.302 "enable_numa": false 00:12:23.302 } 00:12:23.302 } 00:12:23.302 ] 00:12:23.302 }, 00:12:23.302 { 00:12:23.302 "subsystem": "sock", 00:12:23.302 "config": [ 00:12:23.302 { 00:12:23.302 "method": "sock_set_default_impl", 00:12:23.302 "params": { 00:12:23.302 "impl_name": "uring" 00:12:23.302 } 00:12:23.302 }, 00:12:23.302 { 00:12:23.302 "method": "sock_impl_set_options", 00:12:23.302 "params": { 00:12:23.302 "impl_name": "ssl", 00:12:23.302 "recv_buf_size": 4096, 00:12:23.302 "send_buf_size": 4096, 00:12:23.302 "enable_recv_pipe": true, 00:12:23.302 "enable_quickack": false, 00:12:23.302 "enable_placement_id": 0, 00:12:23.302 "enable_zerocopy_send_server": true, 00:12:23.302 "enable_zerocopy_send_client": false, 00:12:23.302 "zerocopy_threshold": 0, 00:12:23.302 "tls_version": 0, 00:12:23.302 "enable_ktls": false 00:12:23.302 } 00:12:23.302 }, 00:12:23.302 { 00:12:23.302 "method": "sock_impl_set_options", 00:12:23.302 "params": { 00:12:23.302 "impl_name": "posix", 00:12:23.302 "recv_buf_size": 2097152, 00:12:23.302 "send_buf_size": 2097152, 
00:12:23.302 "enable_recv_pipe": true, 00:12:23.303 "enable_quickack": false, 00:12:23.303 "enable_placement_id": 0, 00:12:23.303 "enable_zerocopy_send_server": true, 00:12:23.303 "enable_zerocopy_send_client": false, 00:12:23.303 "zerocopy_threshold": 0, 00:12:23.303 "tls_version": 0, 00:12:23.303 "enable_ktls": false 00:12:23.303 } 00:12:23.303 }, 00:12:23.303 { 00:12:23.303 "method": "sock_impl_set_options", 00:12:23.303 "params": { 00:12:23.303 "impl_name": "uring", 00:12:23.303 "recv_buf_size": 2097152, 00:12:23.303 "send_buf_size": 2097152, 00:12:23.303 "enable_recv_pipe": true, 00:12:23.303 "enable_quickack": false, 00:12:23.303 "enable_placement_id": 0, 00:12:23.303 "enable_zerocopy_send_server": false, 00:12:23.303 "enable_zerocopy_send_client": false, 00:12:23.303 "zerocopy_threshold": 0, 00:12:23.303 "tls_version": 0, 00:12:23.303 "enable_ktls": false 00:12:23.303 } 00:12:23.303 } 00:12:23.303 ] 00:12:23.303 }, 00:12:23.303 { 00:12:23.303 "subsystem": "vmd", 00:12:23.303 "config": [] 00:12:23.303 }, 00:12:23.303 { 00:12:23.303 "subsystem": "accel", 00:12:23.303 "config": [ 00:12:23.303 { 00:12:23.303 "method": "accel_set_options", 00:12:23.303 "params": { 00:12:23.303 "small_cache_size": 128, 00:12:23.303 "large_cache_size": 16, 00:12:23.303 "task_count": 2048, 00:12:23.303 "sequence_count": 2048, 00:12:23.303 "buf_count": 2048 00:12:23.303 } 00:12:23.303 } 00:12:23.303 ] 00:12:23.303 }, 00:12:23.303 { 00:12:23.303 "subsystem": "bdev", 00:12:23.303 "config": [ 00:12:23.303 { 00:12:23.303 "method": "bdev_set_options", 00:12:23.303 "params": { 00:12:23.303 "bdev_io_pool_size": 65535, 00:12:23.303 "bdev_io_cache_size": 256, 00:12:23.303 "bdev_auto_examine": true, 00:12:23.303 "iobuf_small_cache_size": 128, 00:12:23.303 "iobuf_large_cache_size": 16 00:12:23.303 } 00:12:23.303 }, 00:12:23.303 { 00:12:23.303 "method": "bdev_raid_set_options", 00:12:23.303 "params": { 00:12:23.303 "process_window_size_kb": 1024, 00:12:23.303 "process_max_bandwidth_mb_sec": 0 00:12:23.303 } 00:12:23.303 }, 00:12:23.303 { 00:12:23.303 "method": "bdev_iscsi_set_options", 00:12:23.303 "params": { 00:12:23.303 "timeout_sec": 30 00:12:23.303 } 00:12:23.303 }, 00:12:23.303 { 00:12:23.303 "method": "bdev_nvme_set_options", 00:12:23.303 "params": { 00:12:23.303 "action_on_timeout": "none", 00:12:23.303 "timeout_us": 0, 00:12:23.303 "timeout_admin_us": 0, 00:12:23.303 "keep_alive_timeout_ms": 10000, 00:12:23.303 "arbitration_burst": 0, 00:12:23.303 "low_priority_weight": 0, 00:12:23.303 "medium_priority_weight": 0, 00:12:23.303 "high_priority_weight": 0, 00:12:23.303 "nvme_adminq_poll_period_us": 10000, 00:12:23.303 "nvme_ioq_poll_period_us": 0, 00:12:23.303 "io_queue_requests": 0, 00:12:23.303 "delay_cmd_submit": true, 00:12:23.303 "transport_retry_count": 4, 00:12:23.303 "bdev_retry_count": 3, 00:12:23.303 "transport_ack_timeout": 0, 00:12:23.303 "ctrlr_loss_timeout_sec": 0, 00:12:23.303 "reconnect_delay_sec": 0, 00:12:23.303 "fast_io_fail_timeout_sec": 0, 00:12:23.303 "disable_auto_failback": false, 00:12:23.303 "generate_uuids": false, 00:12:23.303 "transport_tos": 0, 00:12:23.303 "nvme_error_stat": false, 00:12:23.303 "rdma_srq_size": 0, 00:12:23.303 "io_path_stat": false, 00:12:23.303 "allow_accel_sequence": false, 00:12:23.303 "rdma_max_cq_size": 0, 00:12:23.303 "rdma_cm_event_timeout_ms": 0, 00:12:23.303 "dhchap_digests": [ 00:12:23.303 "sha256", 00:12:23.303 "sha384", 00:12:23.303 "sha512" 00:12:23.303 ], 00:12:23.303 "dhchap_dhgroups": [ 00:12:23.303 "null", 00:12:23.303 "ffdhe2048", 00:12:23.303 
"ffdhe3072", 00:12:23.303 "ffdhe4096", 00:12:23.303 "ffdhe6144", 00:12:23.303 "ffdhe8192" 00:12:23.303 ] 00:12:23.303 } 00:12:23.303 }, 00:12:23.303 { 00:12:23.303 "method": "bdev_nvme_set_hotplug", 00:12:23.303 "params": { 00:12:23.303 "period_us": 100000, 00:12:23.303 "enable": false 00:12:23.303 } 00:12:23.303 }, 00:12:23.303 { 00:12:23.303 "method": "bdev_malloc_create", 00:12:23.303 "params": { 00:12:23.303 "name": "malloc0", 00:12:23.303 "num_blocks": 8192, 00:12:23.303 "block_size": 4096, 00:12:23.303 "physical_block_size": 4096, 00:12:23.303 "uuid": "327b7d9f-fea1-49b0-9289-b2e4074de179", 00:12:23.303 "optimal_io_boundary": 0, 00:12:23.303 "md_size": 0, 00:12:23.303 "dif_type": 0, 00:12:23.303 "dif_is_head_of_md": false, 00:12:23.303 "dif_pi_format": 0 00:12:23.303 } 00:12:23.303 }, 00:12:23.303 { 00:12:23.303 "method": "bdev_wait_for_examine" 00:12:23.303 } 00:12:23.303 ] 00:12:23.303 }, 00:12:23.303 { 00:12:23.303 "subsystem": "nbd", 00:12:23.303 "config": [] 00:12:23.303 }, 00:12:23.303 { 00:12:23.303 "subsystem": "scheduler", 00:12:23.303 "config": [ 00:12:23.303 { 00:12:23.303 "method": "framework_set_scheduler", 00:12:23.303 "params": { 00:12:23.303 "name": "static" 00:12:23.303 } 00:12:23.303 } 00:12:23.303 ] 00:12:23.303 }, 00:12:23.303 { 00:12:23.303 "subsystem": "nvmf", 00:12:23.303 "config": [ 00:12:23.303 { 00:12:23.303 "method": "nvmf_set_config", 00:12:23.303 "params": { 00:12:23.303 "discovery_filter": "match_any", 00:12:23.303 "admin_cmd_passthru": { 00:12:23.303 "identify_ctrlr": false 00:12:23.303 }, 00:12:23.303 "dhchap_digests": [ 00:12:23.303 "sha256", 00:12:23.303 "sha384", 00:12:23.303 "sha512" 00:12:23.303 ], 00:12:23.303 "dhchap_dhgroups": [ 00:12:23.303 "null", 00:12:23.303 "ffdhe2048", 00:12:23.303 "ffdhe3072", 00:12:23.303 "ffdhe4096", 00:12:23.303 "ffdhe6144", 00:12:23.303 "ffdhe8192" 00:12:23.303 ] 00:12:23.303 } 00:12:23.303 }, 00:12:23.303 { 00:12:23.303 "method": "nvmf_set_max_subsystems", 00:12:23.303 "params": { 00:12:23.303 "max_subsystems": 1024 00:12:23.303 } 00:12:23.303 }, 00:12:23.303 { 00:12:23.303 "method": "nvmf_set_crdt", 00:12:23.303 "params": { 00:12:23.303 "crdt1": 0, 00:12:23.303 "crdt2": 0, 00:12:23.303 "crdt3": 0 00:12:23.303 } 00:12:23.303 }, 00:12:23.303 { 00:12:23.303 "method": "nvmf_create_transport", 00:12:23.303 "params": { 00:12:23.303 "trtype": "TCP", 00:12:23.303 "max_queue_depth": 128, 00:12:23.303 "max_io_qpairs_per_ctrlr": 127, 00:12:23.303 "in_capsule_data_size": 4096, 00:12:23.303 "max_io_size": 131072, 00:12:23.303 "io_unit_size": 131072, 00:12:23.303 "max_aq_depth": 128, 00:12:23.303 "num_shared_buffers": 511, 00:12:23.303 "buf_cache_size": 4294967295, 00:12:23.303 "dif_insert_or_strip": false, 00:12:23.303 "zcopy": false, 00:12:23.303 "c2h_success": false, 00:12:23.303 "sock_priority": 0, 00:12:23.303 "abort_timeout_sec": 1, 00:12:23.303 "ack_timeout": 0, 00:12:23.303 "data_wr_pool_size": 0 00:12:23.303 } 00:12:23.303 }, 00:12:23.303 { 00:12:23.303 "method": "nvmf_create_subsystem", 00:12:23.303 "params": { 00:12:23.303 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:23.303 "allow_any_host": false, 00:12:23.303 "serial_number": "SPDK00000000000001", 00:12:23.303 "model_number": "SPDK bdev Controller", 00:12:23.303 "max_namespaces": 10, 00:12:23.303 "min_cntlid": 1, 00:12:23.303 "max_cntlid": 65519, 00:12:23.303 "ana_reporting": false 00:12:23.303 } 00:12:23.303 }, 00:12:23.303 { 00:12:23.303 "method": "nvmf_subsystem_add_host", 00:12:23.303 "params": { 00:12:23.303 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:23.303 
"host": "nqn.2016-06.io.spdk:host1", 00:12:23.303 "psk": "key0" 00:12:23.303 } 00:12:23.303 }, 00:12:23.303 { 00:12:23.303 "method": "nvmf_subsystem_add_ns", 00:12:23.303 "params": { 00:12:23.303 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:23.304 "namespace": { 00:12:23.304 "nsid": 1, 00:12:23.304 "bdev_name": "malloc0", 00:12:23.304 "nguid": "327B7D9FFEA149B09289B2E4074DE179", 00:12:23.304 "uuid": "327b7d9f-fea1-49b0-9289-b2e4074de179", 00:12:23.304 "no_auto_visible": false 00:12:23.304 } 00:12:23.304 } 00:12:23.304 }, 00:12:23.304 { 00:12:23.304 "method": "nvmf_subsystem_add_listener", 00:12:23.304 "params": { 00:12:23.304 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:23.304 "listen_address": { 00:12:23.304 "trtype": "TCP", 00:12:23.304 "adrfam": "IPv4", 00:12:23.304 "traddr": "10.0.0.3", 00:12:23.304 "trsvcid": "4420" 00:12:23.304 }, 00:12:23.304 "secure_channel": true 00:12:23.304 } 00:12:23.304 } 00:12:23.304 ] 00:12:23.304 } 00:12:23.304 ] 00:12:23.304 }' 00:12:23.304 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71094 00:12:23.304 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71094 00:12:23.304 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71094 ']' 00:12:23.304 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.304 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.304 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:12:23.304 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.304 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.304 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:23.562 [2024-11-26 20:35:37.862554] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:12:23.562 [2024-11-26 20:35:37.862627] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.562 [2024-11-26 20:35:37.989563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.562 [2024-11-26 20:35:38.022349] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.562 [2024-11-26 20:35:38.022540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.562 [2024-11-26 20:35:38.022684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.562 [2024-11-26 20:35:38.022746] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.562 [2024-11-26 20:35:38.022761] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:23.562 [2024-11-26 20:35:38.023060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.820 [2024-11-26 20:35:38.167332] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:23.820 [2024-11-26 20:35:38.231310] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.820 [2024-11-26 20:35:38.263251] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:23.820 [2024-11-26 20:35:38.263535] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:24.385 20:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:24.385 20:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:24.385 20:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:24.385 20:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:24.385 20:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:24.385 20:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.385 20:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=71126 00:12:24.385 20:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 71126 /var/tmp/bdevperf.sock 00:12:24.385 20:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71126 ']' 00:12:24.385 20:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:24.385 20:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:24.385 20:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:24.385 20:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:12:24.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:12:24.385 20:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:12:24.385 "subsystems": [ 00:12:24.385 { 00:12:24.385 "subsystem": "keyring", 00:12:24.385 "config": [ 00:12:24.385 { 00:12:24.385 "method": "keyring_file_add_key", 00:12:24.385 "params": { 00:12:24.385 "name": "key0", 00:12:24.385 "path": "/tmp/tmp.JJQPdU3TEO" 00:12:24.385 } 00:12:24.385 } 00:12:24.385 ] 00:12:24.385 }, 00:12:24.385 { 00:12:24.385 "subsystem": "iobuf", 00:12:24.385 "config": [ 00:12:24.385 { 00:12:24.385 "method": "iobuf_set_options", 00:12:24.385 "params": { 00:12:24.385 "small_pool_count": 8192, 00:12:24.385 "large_pool_count": 1024, 00:12:24.385 "small_bufsize": 8192, 00:12:24.385 "large_bufsize": 135168, 00:12:24.385 "enable_numa": false 00:12:24.385 } 00:12:24.385 } 00:12:24.385 ] 00:12:24.385 }, 00:12:24.385 { 00:12:24.385 "subsystem": "sock", 00:12:24.385 "config": [ 00:12:24.385 { 00:12:24.385 "method": "sock_set_default_impl", 00:12:24.385 "params": { 00:12:24.385 "impl_name": "uring" 00:12:24.385 } 00:12:24.385 }, 00:12:24.385 { 00:12:24.385 "method": "sock_impl_set_options", 00:12:24.385 "params": { 00:12:24.385 "impl_name": "ssl", 00:12:24.385 "recv_buf_size": 4096, 00:12:24.385 "send_buf_size": 4096, 00:12:24.385 "enable_recv_pipe": true, 00:12:24.385 "enable_quickack": false, 00:12:24.385 "enable_placement_id": 0, 00:12:24.385 "enable_zerocopy_send_server": true, 00:12:24.385 "enable_zerocopy_send_client": false, 00:12:24.385 "zerocopy_threshold": 0, 00:12:24.385 "tls_version": 0, 00:12:24.385 "enable_ktls": false 00:12:24.385 } 00:12:24.385 }, 00:12:24.385 { 00:12:24.385 "method": "sock_impl_set_options", 00:12:24.385 "params": { 00:12:24.385 "impl_name": "posix", 00:12:24.385 "recv_buf_size": 2097152, 00:12:24.385 "send_buf_size": 2097152, 00:12:24.386 "enable_recv_pipe": true, 00:12:24.386 "enable_quickack": false, 00:12:24.386 "enable_placement_id": 0, 00:12:24.386 "enable_zerocopy_send_server": true, 00:12:24.386 "enable_zerocopy_send_client": false, 00:12:24.386 "zerocopy_threshold": 0, 00:12:24.386 "tls_version": 0, 00:12:24.386 "enable_ktls": false 00:12:24.386 } 00:12:24.386 }, 00:12:24.386 { 00:12:24.386 "method": "sock_impl_set_options", 00:12:24.386 "params": { 00:12:24.386 "impl_name": "uring", 00:12:24.386 "recv_buf_size": 2097152, 00:12:24.386 "send_buf_size": 2097152, 00:12:24.386 "enable_recv_pipe": true, 00:12:24.386 "enable_quickack": false, 00:12:24.386 "enable_placement_id": 0, 00:12:24.386 "enable_zerocopy_send_server": false, 00:12:24.386 "enable_zerocopy_send_client": false, 00:12:24.386 "zerocopy_threshold": 0, 00:12:24.386 "tls_version": 0, 00:12:24.386 "enable_ktls": false 00:12:24.386 } 00:12:24.386 } 00:12:24.386 ] 00:12:24.386 }, 00:12:24.386 { 00:12:24.386 "subsystem": "vmd", 00:12:24.386 "config": [] 00:12:24.386 }, 00:12:24.386 { 00:12:24.386 "subsystem": "accel", 00:12:24.386 "config": [ 00:12:24.386 { 00:12:24.386 "method": "accel_set_options", 00:12:24.386 "params": { 00:12:24.386 "small_cache_size": 128, 00:12:24.386 "large_cache_size": 16, 00:12:24.386 "task_count": 2048, 00:12:24.386 "sequence_count": 2048, 00:12:24.386 "buf_count": 2048 00:12:24.386 } 00:12:24.386 } 00:12:24.386 ] 00:12:24.386 }, 00:12:24.386 { 00:12:24.386 "subsystem": "bdev", 00:12:24.386 "config": [ 00:12:24.386 { 00:12:24.386 "method": "bdev_set_options", 00:12:24.386 "params": { 00:12:24.386 "bdev_io_pool_size": 65535, 00:12:24.386 "bdev_io_cache_size": 256, 00:12:24.386 "bdev_auto_examine": true, 00:12:24.386 "iobuf_small_cache_size": 128, 00:12:24.386 
"iobuf_large_cache_size": 16 00:12:24.386 } 00:12:24.386 }, 00:12:24.386 { 00:12:24.386 "method": "bdev_raid_set_options", 00:12:24.386 "params": { 00:12:24.386 "process_window_size_kb": 1024, 00:12:24.386 "process_max_bandwidth_mb_sec": 0 00:12:24.386 } 00:12:24.386 }, 00:12:24.386 { 00:12:24.386 "method": "bdev_iscsi_set_options", 00:12:24.386 "params": { 00:12:24.386 "timeout_sec": 30 00:12:24.386 } 00:12:24.386 }, 00:12:24.386 { 00:12:24.386 "method": "bdev_nvme_set_options", 00:12:24.386 "params": { 00:12:24.386 "action_on_timeout": "none", 00:12:24.386 "timeout_us": 0, 00:12:24.386 "timeout_admin_us": 0, 00:12:24.386 "keep_alive_timeout_ms": 10000, 00:12:24.386 "arbitration_burst": 0, 00:12:24.386 "low_priority_weight": 0, 00:12:24.386 "medium_priority_weight": 0, 00:12:24.386 "high_priority_weight": 0, 00:12:24.386 "nvme_adminq_poll_period_us": 10000, 00:12:24.386 "nvme_ioq_poll_period_us": 0, 00:12:24.386 "io_queue_requests": 512, 00:12:24.386 "delay_cmd_submit": true, 00:12:24.386 "transport_retry_count": 4, 00:12:24.386 20:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:24.386 20:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:24.386 "bdev_retry_count": 3, 00:12:24.386 "transport_ack_timeout": 0, 00:12:24.386 "ctrlr_loss_timeout_sec": 0, 00:12:24.386 "reconnect_delay_sec": 0, 00:12:24.386 "fast_io_fail_timeout_sec": 0, 00:12:24.386 "disable_auto_failback": false, 00:12:24.386 "generate_uuids": false, 00:12:24.386 "transport_tos": 0, 00:12:24.386 "nvme_error_stat": false, 00:12:24.386 "rdma_srq_size": 0, 00:12:24.386 "io_path_stat": false, 00:12:24.386 "allow_accel_sequence": false, 00:12:24.386 "rdma_max_cq_size": 0, 00:12:24.386 "rdma_cm_event_timeout_ms": 0, 00:12:24.386 "dhchap_digests": [ 00:12:24.386 "sha256", 00:12:24.386 "sha384", 00:12:24.386 "sha512" 00:12:24.386 ], 00:12:24.386 "dhchap_dhgroups": [ 00:12:24.386 "null", 00:12:24.386 "ffdhe2048", 00:12:24.386 "ffdhe3072", 00:12:24.386 "ffdhe4096", 00:12:24.386 "ffdhe6144", 00:12:24.386 "ffdhe8192" 00:12:24.386 ] 00:12:24.386 } 00:12:24.386 }, 00:12:24.386 { 00:12:24.386 "method": "bdev_nvme_attach_controller", 00:12:24.386 "params": { 00:12:24.386 "name": "TLSTEST", 00:12:24.386 "trtype": "TCP", 00:12:24.386 "adrfam": "IPv4", 00:12:24.386 "traddr": "10.0.0.3", 00:12:24.386 "trsvcid": "4420", 00:12:24.386 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:24.386 "prchk_reftag": false, 00:12:24.386 "prchk_guard": false, 00:12:24.386 "ctrlr_loss_timeout_sec": 0, 00:12:24.386 "reconnect_delay_sec": 0, 00:12:24.386 "fast_io_fail_timeout_sec": 0, 00:12:24.386 "psk": "key0", 00:12:24.386 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:24.386 "hdgst": false, 00:12:24.386 "ddgst": false, 00:12:24.386 "multipath": "multipath" 00:12:24.386 } 00:12:24.386 }, 00:12:24.386 { 00:12:24.386 "method": "bdev_nvme_set_hotplug", 00:12:24.386 "params": { 00:12:24.386 "period_us": 100000, 00:12:24.386 "enable": false 00:12:24.386 } 00:12:24.386 }, 00:12:24.386 { 00:12:24.386 "method": "bdev_wait_for_examine" 00:12:24.386 } 00:12:24.386 ] 00:12:24.386 }, 00:12:24.386 { 00:12:24.386 "subsystem": "nbd", 00:12:24.386 "config": [] 00:12:24.386 } 00:12:24.386 ] 00:12:24.386 }' 00:12:24.386 [2024-11-26 20:35:38.771478] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:12:24.386 [2024-11-26 20:35:38.771543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71126 ] 00:12:24.386 [2024-11-26 20:35:38.905198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.644 [2024-11-26 20:35:38.939537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.644 [2024-11-26 20:35:39.050850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:24.644 [2024-11-26 20:35:39.088320] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:25.210 20:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.210 20:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:25.210 20:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:25.211 Running I/O for 10 seconds... 00:12:27.517 6880.00 IOPS, 26.88 MiB/s [2024-11-26T20:35:43.032Z] 6940.50 IOPS, 27.11 MiB/s [2024-11-26T20:35:43.969Z] 6962.00 IOPS, 27.20 MiB/s [2024-11-26T20:35:44.904Z] 6973.50 IOPS, 27.24 MiB/s [2024-11-26T20:35:45.839Z] 6870.40 IOPS, 26.84 MiB/s [2024-11-26T20:35:46.772Z] 6687.50 IOPS, 26.12 MiB/s [2024-11-26T20:35:47.775Z] 6553.57 IOPS, 25.60 MiB/s [2024-11-26T20:35:49.148Z] 6453.00 IOPS, 25.21 MiB/s [2024-11-26T20:35:50.090Z] 6461.22 IOPS, 25.24 MiB/s [2024-11-26T20:35:50.090Z] 6481.70 IOPS, 25.32 MiB/s 00:12:35.535 Latency(us) 00:12:35.535 [2024-11-26T20:35:50.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.535 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:35.535 Verification LBA range: start 0x0 length 0x2000 00:12:35.535 TLSTESTn1 : 10.01 6487.22 25.34 0.00 0.00 19701.17 3680.10 20164.92 00:12:35.535 [2024-11-26T20:35:50.090Z] =================================================================================================================== 00:12:35.535 [2024-11-26T20:35:50.090Z] Total : 6487.22 25.34 0.00 0.00 19701.17 3680.10 20164.92 00:12:35.535 { 00:12:35.535 "results": [ 00:12:35.535 { 00:12:35.535 "job": "TLSTESTn1", 00:12:35.535 "core_mask": "0x4", 00:12:35.535 "workload": "verify", 00:12:35.535 "status": "finished", 00:12:35.535 "verify_range": { 00:12:35.535 "start": 0, 00:12:35.535 "length": 8192 00:12:35.535 }, 00:12:35.535 "queue_depth": 128, 00:12:35.535 "io_size": 4096, 00:12:35.535 "runtime": 10.010456, 00:12:35.535 "iops": 6487.216965940413, 00:12:35.535 "mibps": 25.340691273204737, 00:12:35.535 "io_failed": 0, 00:12:35.535 "io_timeout": 0, 00:12:35.535 "avg_latency_us": 19701.172542133565, 00:12:35.535 "min_latency_us": 3680.0984615384614, 00:12:35.535 "max_latency_us": 20164.923076923078 00:12:35.535 } 00:12:35.535 ], 00:12:35.535 "core_count": 1 00:12:35.535 } 00:12:35.535 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:35.535 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 71126 00:12:35.535 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71126 ']' 00:12:35.535 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 71126 00:12:35.535 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:35.535 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:35.535 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71126 00:12:35.535 killing process with pid 71126 00:12:35.535 Received shutdown signal, test time was about 10.000000 seconds 00:12:35.535 00:12:35.535 Latency(us) 00:12:35.535 [2024-11-26T20:35:50.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.535 [2024-11-26T20:35:50.090Z] =================================================================================================================== 00:12:35.535 [2024-11-26T20:35:50.090Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:35.535 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:35.535 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:35.535 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71126' 00:12:35.535 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71126 00:12:35.535 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71126 00:12:35.535 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 71094 00:12:35.535 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71094 ']' 00:12:35.535 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71094 00:12:35.535 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:35.535 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:35.535 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71094 00:12:35.535 killing process with pid 71094 00:12:35.535 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:35.535 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:35.535 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71094' 00:12:35.535 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71094 00:12:35.535 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71094 00:12:35.535 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:12:35.535 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:35.535 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:35.535 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:35.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:35.536 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71259 00:12:35.536 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:35.536 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71259 00:12:35.536 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71259 ']' 00:12:35.536 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.536 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:35.536 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.536 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:35.536 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:35.793 [2024-11-26 20:35:50.117326] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:12:35.793 [2024-11-26 20:35:50.117620] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.793 [2024-11-26 20:35:50.261381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.793 [2024-11-26 20:35:50.300639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.793 [2024-11-26 20:35:50.300700] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.793 [2024-11-26 20:35:50.300707] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.793 [2024-11-26 20:35:50.300712] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:35.793 [2024-11-26 20:35:50.300716] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:35.793 [2024-11-26 20:35:50.301055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.793 [2024-11-26 20:35:50.335842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:36.727 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:36.727 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:36.727 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:36.727 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:36.727 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:36.727 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.727 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.JJQPdU3TEO 00:12:36.727 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JJQPdU3TEO 00:12:36.727 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:36.727 [2024-11-26 20:35:51.162603] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:36.727 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:36.985 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:12:37.243 [2024-11-26 20:35:51.590676] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:37.243 [2024-11-26 20:35:51.590854] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:37.243 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:37.500 malloc0 00:12:37.500 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:37.500 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JJQPdU3TEO 00:12:37.757 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:12:38.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:12:38.015 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=71314 00:12:38.015 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:12:38.015 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:38.015 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 71314 /var/tmp/bdevperf.sock 00:12:38.015 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71314 ']' 00:12:38.015 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:38.015 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.015 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:38.015 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.015 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:38.015 [2024-11-26 20:35:52.532430] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:12:38.015 [2024-11-26 20:35:52.532690] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71314 ] 00:12:38.273 [2024-11-26 20:35:52.672632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.273 [2024-11-26 20:35:52.717769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.273 [2024-11-26 20:35:52.760169] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:38.912 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.912 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:38.912 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JJQPdU3TEO 00:12:39.172 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:12:39.430 [2024-11-26 20:35:53.819045] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:39.430 nvme0n1 00:12:39.430 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:39.688 Running I/O for 1 seconds... 
00:12:40.626 6095.00 IOPS, 23.81 MiB/s 00:12:40.626 Latency(us) 00:12:40.626 [2024-11-26T20:35:55.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.626 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:40.626 Verification LBA range: start 0x0 length 0x2000 00:12:40.626 nvme0n1 : 1.01 6164.62 24.08 0.00 0.00 20653.48 2596.23 16434.41 00:12:40.626 [2024-11-26T20:35:55.181Z] =================================================================================================================== 00:12:40.626 [2024-11-26T20:35:55.181Z] Total : 6164.62 24.08 0.00 0.00 20653.48 2596.23 16434.41 00:12:40.626 { 00:12:40.626 "results": [ 00:12:40.626 { 00:12:40.626 "job": "nvme0n1", 00:12:40.626 "core_mask": "0x2", 00:12:40.626 "workload": "verify", 00:12:40.626 "status": "finished", 00:12:40.626 "verify_range": { 00:12:40.626 "start": 0, 00:12:40.626 "length": 8192 00:12:40.626 }, 00:12:40.626 "queue_depth": 128, 00:12:40.626 "io_size": 4096, 00:12:40.626 "runtime": 1.00947, 00:12:40.626 "iops": 6164.621038762915, 00:12:40.626 "mibps": 24.080550932667638, 00:12:40.626 "io_failed": 0, 00:12:40.626 "io_timeout": 0, 00:12:40.626 "avg_latency_us": 20653.483186689577, 00:12:40.626 "min_latency_us": 2596.233846153846, 00:12:40.626 "max_latency_us": 16434.412307692306 00:12:40.627 } 00:12:40.627 ], 00:12:40.627 "core_count": 1 00:12:40.627 } 00:12:40.627 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 71314 00:12:40.627 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71314 ']' 00:12:40.627 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71314 00:12:40.627 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:40.627 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.627 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71314 00:12:40.627 killing process with pid 71314 00:12:40.627 Received shutdown signal, test time was about 1.000000 seconds 00:12:40.627 00:12:40.627 Latency(us) 00:12:40.627 [2024-11-26T20:35:55.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.627 [2024-11-26T20:35:55.182Z] =================================================================================================================== 00:12:40.627 [2024-11-26T20:35:55.182Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:40.627 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:40.627 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:40.627 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71314' 00:12:40.627 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71314 00:12:40.627 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71314 00:12:40.887 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 71259 00:12:40.887 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71259 ']' 00:12:40.887 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71259 00:12:40.887 20:35:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:40.887 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.887 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71259 00:12:40.887 killing process with pid 71259 00:12:40.887 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:40.887 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:40.887 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71259' 00:12:40.887 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71259 00:12:40.887 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71259 00:12:40.887 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:12:40.887 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:40.887 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:40.887 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:40.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.887 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71360 00:12:40.887 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:40.887 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71360 00:12:40.887 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71360 ']' 00:12:40.887 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.887 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.887 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.887 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.887 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:40.887 [2024-11-26 20:35:55.403417] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:12:40.887 [2024-11-26 20:35:55.403638] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.177 [2024-11-26 20:35:55.542188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.177 [2024-11-26 20:35:55.575105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:41.177 [2024-11-26 20:35:55.575144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:41.177 [2024-11-26 20:35:55.575150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.177 [2024-11-26 20:35:55.575155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.177 [2024-11-26 20:35:55.575159] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:41.177 [2024-11-26 20:35:55.575379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.177 [2024-11-26 20:35:55.607087] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:41.743 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.743 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:41.743 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:41.743 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:41.743 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:42.000 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.001 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:12:42.001 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.001 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:42.001 [2024-11-26 20:35:56.316486] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:42.001 malloc0 00:12:42.001 [2024-11-26 20:35:56.342709] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:42.001 [2024-11-26 20:35:56.342866] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:42.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:42.001 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.001 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=71392 00:12:42.001 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 71392 /var/tmp/bdevperf.sock 00:12:42.001 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71392 ']' 00:12:42.001 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:42.001 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:12:42.001 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.001 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:12:42.001 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.001 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:42.001 [2024-11-26 20:35:56.409681] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:12:42.001 [2024-11-26 20:35:56.409751] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71392 ] 00:12:42.001 [2024-11-26 20:35:56.542179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.258 [2024-11-26 20:35:56.578826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.258 [2024-11-26 20:35:56.611792] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:42.821 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.821 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:42.821 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JJQPdU3TEO 00:12:43.079 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:12:43.336 [2024-11-26 20:35:57.706092] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:43.336 nvme0n1 00:12:43.336 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:43.336 Running I/O for 1 seconds... 
00:12:44.711 6278.00 IOPS, 24.52 MiB/s 00:12:44.711 Latency(us) 00:12:44.711 [2024-11-26T20:35:59.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:44.711 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:44.711 Verification LBA range: start 0x0 length 0x2000 00:12:44.711 nvme0n1 : 1.01 6344.90 24.78 0.00 0.00 20065.74 2898.71 15829.46 00:12:44.711 [2024-11-26T20:35:59.266Z] =================================================================================================================== 00:12:44.711 [2024-11-26T20:35:59.266Z] Total : 6344.90 24.78 0.00 0.00 20065.74 2898.71 15829.46 00:12:44.711 { 00:12:44.711 "results": [ 00:12:44.711 { 00:12:44.711 "job": "nvme0n1", 00:12:44.711 "core_mask": "0x2", 00:12:44.711 "workload": "verify", 00:12:44.711 "status": "finished", 00:12:44.711 "verify_range": { 00:12:44.711 "start": 0, 00:12:44.711 "length": 8192 00:12:44.711 }, 00:12:44.711 "queue_depth": 128, 00:12:44.711 "io_size": 4096, 00:12:44.711 "runtime": 1.009629, 00:12:44.711 "iops": 6344.904910615682, 00:12:44.711 "mibps": 24.784784807092507, 00:12:44.711 "io_failed": 0, 00:12:44.711 "io_timeout": 0, 00:12:44.711 "avg_latency_us": 20065.73827805663, 00:12:44.711 "min_latency_us": 2898.7076923076925, 00:12:44.711 "max_latency_us": 15829.464615384615 00:12:44.711 } 00:12:44.711 ], 00:12:44.711 "core_count": 1 00:12:44.711 } 00:12:44.711 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:12:44.711 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.711 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:44.711 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.711 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:12:44.711 "subsystems": [ 00:12:44.711 { 00:12:44.711 "subsystem": "keyring", 00:12:44.711 "config": [ 00:12:44.711 { 00:12:44.711 "method": "keyring_file_add_key", 00:12:44.711 "params": { 00:12:44.711 "name": "key0", 00:12:44.711 "path": "/tmp/tmp.JJQPdU3TEO" 00:12:44.711 } 00:12:44.711 } 00:12:44.711 ] 00:12:44.711 }, 00:12:44.711 { 00:12:44.711 "subsystem": "iobuf", 00:12:44.711 "config": [ 00:12:44.711 { 00:12:44.711 "method": "iobuf_set_options", 00:12:44.711 "params": { 00:12:44.711 "small_pool_count": 8192, 00:12:44.711 "large_pool_count": 1024, 00:12:44.711 "small_bufsize": 8192, 00:12:44.711 "large_bufsize": 135168, 00:12:44.711 "enable_numa": false 00:12:44.711 } 00:12:44.711 } 00:12:44.711 ] 00:12:44.711 }, 00:12:44.711 { 00:12:44.711 "subsystem": "sock", 00:12:44.711 "config": [ 00:12:44.711 { 00:12:44.711 "method": "sock_set_default_impl", 00:12:44.711 "params": { 00:12:44.711 "impl_name": "uring" 00:12:44.711 } 00:12:44.711 }, 00:12:44.711 { 00:12:44.711 "method": "sock_impl_set_options", 00:12:44.711 "params": { 00:12:44.711 "impl_name": "ssl", 00:12:44.711 "recv_buf_size": 4096, 00:12:44.711 "send_buf_size": 4096, 00:12:44.711 "enable_recv_pipe": true, 00:12:44.711 "enable_quickack": false, 00:12:44.711 "enable_placement_id": 0, 00:12:44.711 "enable_zerocopy_send_server": true, 00:12:44.711 "enable_zerocopy_send_client": false, 00:12:44.711 "zerocopy_threshold": 0, 00:12:44.711 "tls_version": 0, 00:12:44.711 "enable_ktls": false 00:12:44.711 } 00:12:44.711 }, 00:12:44.711 { 00:12:44.711 "method": "sock_impl_set_options", 00:12:44.711 "params": { 00:12:44.711 "impl_name": 
"posix", 00:12:44.711 "recv_buf_size": 2097152, 00:12:44.711 "send_buf_size": 2097152, 00:12:44.711 "enable_recv_pipe": true, 00:12:44.711 "enable_quickack": false, 00:12:44.711 "enable_placement_id": 0, 00:12:44.711 "enable_zerocopy_send_server": true, 00:12:44.711 "enable_zerocopy_send_client": false, 00:12:44.711 "zerocopy_threshold": 0, 00:12:44.711 "tls_version": 0, 00:12:44.711 "enable_ktls": false 00:12:44.711 } 00:12:44.711 }, 00:12:44.711 { 00:12:44.711 "method": "sock_impl_set_options", 00:12:44.711 "params": { 00:12:44.711 "impl_name": "uring", 00:12:44.711 "recv_buf_size": 2097152, 00:12:44.711 "send_buf_size": 2097152, 00:12:44.712 "enable_recv_pipe": true, 00:12:44.712 "enable_quickack": false, 00:12:44.712 "enable_placement_id": 0, 00:12:44.712 "enable_zerocopy_send_server": false, 00:12:44.712 "enable_zerocopy_send_client": false, 00:12:44.712 "zerocopy_threshold": 0, 00:12:44.712 "tls_version": 0, 00:12:44.712 "enable_ktls": false 00:12:44.712 } 00:12:44.712 } 00:12:44.712 ] 00:12:44.712 }, 00:12:44.712 { 00:12:44.712 "subsystem": "vmd", 00:12:44.712 "config": [] 00:12:44.712 }, 00:12:44.712 { 00:12:44.712 "subsystem": "accel", 00:12:44.712 "config": [ 00:12:44.712 { 00:12:44.712 "method": "accel_set_options", 00:12:44.712 "params": { 00:12:44.712 "small_cache_size": 128, 00:12:44.712 "large_cache_size": 16, 00:12:44.712 "task_count": 2048, 00:12:44.712 "sequence_count": 2048, 00:12:44.712 "buf_count": 2048 00:12:44.712 } 00:12:44.712 } 00:12:44.712 ] 00:12:44.712 }, 00:12:44.712 { 00:12:44.712 "subsystem": "bdev", 00:12:44.712 "config": [ 00:12:44.712 { 00:12:44.712 "method": "bdev_set_options", 00:12:44.712 "params": { 00:12:44.712 "bdev_io_pool_size": 65535, 00:12:44.712 "bdev_io_cache_size": 256, 00:12:44.712 "bdev_auto_examine": true, 00:12:44.712 "iobuf_small_cache_size": 128, 00:12:44.712 "iobuf_large_cache_size": 16 00:12:44.712 } 00:12:44.712 }, 00:12:44.712 { 00:12:44.712 "method": "bdev_raid_set_options", 00:12:44.712 "params": { 00:12:44.712 "process_window_size_kb": 1024, 00:12:44.712 "process_max_bandwidth_mb_sec": 0 00:12:44.712 } 00:12:44.712 }, 00:12:44.712 { 00:12:44.712 "method": "bdev_iscsi_set_options", 00:12:44.712 "params": { 00:12:44.712 "timeout_sec": 30 00:12:44.712 } 00:12:44.712 }, 00:12:44.712 { 00:12:44.712 "method": "bdev_nvme_set_options", 00:12:44.712 "params": { 00:12:44.712 "action_on_timeout": "none", 00:12:44.712 "timeout_us": 0, 00:12:44.712 "timeout_admin_us": 0, 00:12:44.712 "keep_alive_timeout_ms": 10000, 00:12:44.712 "arbitration_burst": 0, 00:12:44.712 "low_priority_weight": 0, 00:12:44.712 "medium_priority_weight": 0, 00:12:44.712 "high_priority_weight": 0, 00:12:44.712 "nvme_adminq_poll_period_us": 10000, 00:12:44.712 "nvme_ioq_poll_period_us": 0, 00:12:44.712 "io_queue_requests": 0, 00:12:44.712 "delay_cmd_submit": true, 00:12:44.712 "transport_retry_count": 4, 00:12:44.712 "bdev_retry_count": 3, 00:12:44.712 "transport_ack_timeout": 0, 00:12:44.712 "ctrlr_loss_timeout_sec": 0, 00:12:44.712 "reconnect_delay_sec": 0, 00:12:44.712 "fast_io_fail_timeout_sec": 0, 00:12:44.712 "disable_auto_failback": false, 00:12:44.712 "generate_uuids": false, 00:12:44.712 "transport_tos": 0, 00:12:44.712 "nvme_error_stat": false, 00:12:44.712 "rdma_srq_size": 0, 00:12:44.712 "io_path_stat": false, 00:12:44.712 "allow_accel_sequence": false, 00:12:44.712 "rdma_max_cq_size": 0, 00:12:44.712 "rdma_cm_event_timeout_ms": 0, 00:12:44.712 "dhchap_digests": [ 00:12:44.712 "sha256", 00:12:44.712 "sha384", 00:12:44.712 "sha512" 00:12:44.712 ], 00:12:44.712 
"dhchap_dhgroups": [ 00:12:44.712 "null", 00:12:44.712 "ffdhe2048", 00:12:44.712 "ffdhe3072", 00:12:44.712 "ffdhe4096", 00:12:44.712 "ffdhe6144", 00:12:44.712 "ffdhe8192" 00:12:44.712 ] 00:12:44.712 } 00:12:44.712 }, 00:12:44.712 { 00:12:44.712 "method": "bdev_nvme_set_hotplug", 00:12:44.712 "params": { 00:12:44.712 "period_us": 100000, 00:12:44.712 "enable": false 00:12:44.712 } 00:12:44.712 }, 00:12:44.712 { 00:12:44.712 "method": "bdev_malloc_create", 00:12:44.712 "params": { 00:12:44.712 "name": "malloc0", 00:12:44.712 "num_blocks": 8192, 00:12:44.712 "block_size": 4096, 00:12:44.712 "physical_block_size": 4096, 00:12:44.712 "uuid": "7391bbf1-6211-46c5-8113-72ccfa3332ca", 00:12:44.712 "optimal_io_boundary": 0, 00:12:44.712 "md_size": 0, 00:12:44.712 "dif_type": 0, 00:12:44.712 "dif_is_head_of_md": false, 00:12:44.712 "dif_pi_format": 0 00:12:44.712 } 00:12:44.712 }, 00:12:44.712 { 00:12:44.712 "method": "bdev_wait_for_examine" 00:12:44.712 } 00:12:44.712 ] 00:12:44.712 }, 00:12:44.712 { 00:12:44.712 "subsystem": "nbd", 00:12:44.712 "config": [] 00:12:44.712 }, 00:12:44.712 { 00:12:44.712 "subsystem": "scheduler", 00:12:44.712 "config": [ 00:12:44.712 { 00:12:44.712 "method": "framework_set_scheduler", 00:12:44.712 "params": { 00:12:44.712 "name": "static" 00:12:44.712 } 00:12:44.712 } 00:12:44.712 ] 00:12:44.712 }, 00:12:44.712 { 00:12:44.712 "subsystem": "nvmf", 00:12:44.712 "config": [ 00:12:44.712 { 00:12:44.712 "method": "nvmf_set_config", 00:12:44.712 "params": { 00:12:44.712 "discovery_filter": "match_any", 00:12:44.712 "admin_cmd_passthru": { 00:12:44.712 "identify_ctrlr": false 00:12:44.712 }, 00:12:44.712 "dhchap_digests": [ 00:12:44.712 "sha256", 00:12:44.712 "sha384", 00:12:44.712 "sha512" 00:12:44.712 ], 00:12:44.712 "dhchap_dhgroups": [ 00:12:44.712 "null", 00:12:44.712 "ffdhe2048", 00:12:44.712 "ffdhe3072", 00:12:44.712 "ffdhe4096", 00:12:44.712 "ffdhe6144", 00:12:44.712 "ffdhe8192" 00:12:44.712 ] 00:12:44.712 } 00:12:44.712 }, 00:12:44.712 { 00:12:44.712 "method": "nvmf_set_max_subsystems", 00:12:44.712 "params": { 00:12:44.712 "max_subsystems": 1024 00:12:44.712 } 00:12:44.712 }, 00:12:44.712 { 00:12:44.712 "method": "nvmf_set_crdt", 00:12:44.712 "params": { 00:12:44.712 "crdt1": 0, 00:12:44.712 "crdt2": 0, 00:12:44.712 "crdt3": 0 00:12:44.712 } 00:12:44.712 }, 00:12:44.712 { 00:12:44.712 "method": "nvmf_create_transport", 00:12:44.712 "params": { 00:12:44.712 "trtype": "TCP", 00:12:44.712 "max_queue_depth": 128, 00:12:44.712 "max_io_qpairs_per_ctrlr": 127, 00:12:44.712 "in_capsule_data_size": 4096, 00:12:44.712 "max_io_size": 131072, 00:12:44.712 "io_unit_size": 131072, 00:12:44.712 "max_aq_depth": 128, 00:12:44.712 "num_shared_buffers": 511, 00:12:44.712 "buf_cache_size": 4294967295, 00:12:44.712 "dif_insert_or_strip": false, 00:12:44.712 "zcopy": false, 00:12:44.712 "c2h_success": false, 00:12:44.712 "sock_priority": 0, 00:12:44.712 "abort_timeout_sec": 1, 00:12:44.712 "ack_timeout": 0, 00:12:44.712 "data_wr_pool_size": 0 00:12:44.712 } 00:12:44.712 }, 00:12:44.712 { 00:12:44.712 "method": "nvmf_create_subsystem", 00:12:44.712 "params": { 00:12:44.712 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:44.712 "allow_any_host": false, 00:12:44.712 "serial_number": "00000000000000000000", 00:12:44.712 "model_number": "SPDK bdev Controller", 00:12:44.712 "max_namespaces": 32, 00:12:44.712 "min_cntlid": 1, 00:12:44.712 "max_cntlid": 65519, 00:12:44.712 "ana_reporting": false 00:12:44.712 } 00:12:44.712 }, 00:12:44.712 { 00:12:44.712 "method": "nvmf_subsystem_add_host", 
00:12:44.712 "params": { 00:12:44.712 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:44.712 "host": "nqn.2016-06.io.spdk:host1", 00:12:44.712 "psk": "key0" 00:12:44.712 } 00:12:44.712 }, 00:12:44.712 { 00:12:44.712 "method": "nvmf_subsystem_add_ns", 00:12:44.712 "params": { 00:12:44.712 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:44.712 "namespace": { 00:12:44.712 "nsid": 1, 00:12:44.712 "bdev_name": "malloc0", 00:12:44.712 "nguid": "7391BBF1621146C5811372CCFA3332CA", 00:12:44.712 "uuid": "7391bbf1-6211-46c5-8113-72ccfa3332ca", 00:12:44.712 "no_auto_visible": false 00:12:44.712 } 00:12:44.712 } 00:12:44.712 }, 00:12:44.712 { 00:12:44.712 "method": "nvmf_subsystem_add_listener", 00:12:44.712 "params": { 00:12:44.712 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:44.712 "listen_address": { 00:12:44.712 "trtype": "TCP", 00:12:44.712 "adrfam": "IPv4", 00:12:44.712 "traddr": "10.0.0.3", 00:12:44.712 "trsvcid": "4420" 00:12:44.712 }, 00:12:44.712 "secure_channel": false, 00:12:44.712 "sock_impl": "ssl" 00:12:44.712 } 00:12:44.712 } 00:12:44.712 ] 00:12:44.712 } 00:12:44.712 ] 00:12:44.712 }' 00:12:44.712 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:12:44.712 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:12:44.712 "subsystems": [ 00:12:44.712 { 00:12:44.712 "subsystem": "keyring", 00:12:44.712 "config": [ 00:12:44.712 { 00:12:44.712 "method": "keyring_file_add_key", 00:12:44.712 "params": { 00:12:44.712 "name": "key0", 00:12:44.712 "path": "/tmp/tmp.JJQPdU3TEO" 00:12:44.712 } 00:12:44.712 } 00:12:44.712 ] 00:12:44.712 }, 00:12:44.712 { 00:12:44.712 "subsystem": "iobuf", 00:12:44.712 "config": [ 00:12:44.712 { 00:12:44.712 "method": "iobuf_set_options", 00:12:44.712 "params": { 00:12:44.712 "small_pool_count": 8192, 00:12:44.712 "large_pool_count": 1024, 00:12:44.712 "small_bufsize": 8192, 00:12:44.712 "large_bufsize": 135168, 00:12:44.712 "enable_numa": false 00:12:44.712 } 00:12:44.712 } 00:12:44.712 ] 00:12:44.712 }, 00:12:44.712 { 00:12:44.712 "subsystem": "sock", 00:12:44.713 "config": [ 00:12:44.713 { 00:12:44.713 "method": "sock_set_default_impl", 00:12:44.713 "params": { 00:12:44.713 "impl_name": "uring" 00:12:44.713 } 00:12:44.713 }, 00:12:44.713 { 00:12:44.713 "method": "sock_impl_set_options", 00:12:44.713 "params": { 00:12:44.713 "impl_name": "ssl", 00:12:44.713 "recv_buf_size": 4096, 00:12:44.713 "send_buf_size": 4096, 00:12:44.713 "enable_recv_pipe": true, 00:12:44.713 "enable_quickack": false, 00:12:44.713 "enable_placement_id": 0, 00:12:44.713 "enable_zerocopy_send_server": true, 00:12:44.713 "enable_zerocopy_send_client": false, 00:12:44.713 "zerocopy_threshold": 0, 00:12:44.713 "tls_version": 0, 00:12:44.713 "enable_ktls": false 00:12:44.713 } 00:12:44.713 }, 00:12:44.713 { 00:12:44.713 "method": "sock_impl_set_options", 00:12:44.713 "params": { 00:12:44.713 "impl_name": "posix", 00:12:44.713 "recv_buf_size": 2097152, 00:12:44.713 "send_buf_size": 2097152, 00:12:44.713 "enable_recv_pipe": true, 00:12:44.713 "enable_quickack": false, 00:12:44.713 "enable_placement_id": 0, 00:12:44.713 "enable_zerocopy_send_server": true, 00:12:44.713 "enable_zerocopy_send_client": false, 00:12:44.713 "zerocopy_threshold": 0, 00:12:44.713 "tls_version": 0, 00:12:44.713 "enable_ktls": false 00:12:44.713 } 00:12:44.713 }, 00:12:44.713 { 00:12:44.713 "method": "sock_impl_set_options", 00:12:44.713 "params": { 00:12:44.713 "impl_name": "uring", 00:12:44.713 
"recv_buf_size": 2097152, 00:12:44.713 "send_buf_size": 2097152, 00:12:44.713 "enable_recv_pipe": true, 00:12:44.713 "enable_quickack": false, 00:12:44.713 "enable_placement_id": 0, 00:12:44.713 "enable_zerocopy_send_server": false, 00:12:44.713 "enable_zerocopy_send_client": false, 00:12:44.713 "zerocopy_threshold": 0, 00:12:44.713 "tls_version": 0, 00:12:44.713 "enable_ktls": false 00:12:44.713 } 00:12:44.713 } 00:12:44.713 ] 00:12:44.713 }, 00:12:44.713 { 00:12:44.713 "subsystem": "vmd", 00:12:44.713 "config": [] 00:12:44.713 }, 00:12:44.713 { 00:12:44.713 "subsystem": "accel", 00:12:44.713 "config": [ 00:12:44.713 { 00:12:44.713 "method": "accel_set_options", 00:12:44.713 "params": { 00:12:44.713 "small_cache_size": 128, 00:12:44.713 "large_cache_size": 16, 00:12:44.713 "task_count": 2048, 00:12:44.713 "sequence_count": 2048, 00:12:44.713 "buf_count": 2048 00:12:44.713 } 00:12:44.713 } 00:12:44.713 ] 00:12:44.713 }, 00:12:44.713 { 00:12:44.713 "subsystem": "bdev", 00:12:44.713 "config": [ 00:12:44.713 { 00:12:44.713 "method": "bdev_set_options", 00:12:44.713 "params": { 00:12:44.713 "bdev_io_pool_size": 65535, 00:12:44.713 "bdev_io_cache_size": 256, 00:12:44.713 "bdev_auto_examine": true, 00:12:44.713 "iobuf_small_cache_size": 128, 00:12:44.713 "iobuf_large_cache_size": 16 00:12:44.713 } 00:12:44.713 }, 00:12:44.713 { 00:12:44.713 "method": "bdev_raid_set_options", 00:12:44.713 "params": { 00:12:44.713 "process_window_size_kb": 1024, 00:12:44.713 "process_max_bandwidth_mb_sec": 0 00:12:44.713 } 00:12:44.713 }, 00:12:44.713 { 00:12:44.713 "method": "bdev_iscsi_set_options", 00:12:44.713 "params": { 00:12:44.713 "timeout_sec": 30 00:12:44.713 } 00:12:44.713 }, 00:12:44.713 { 00:12:44.713 "method": "bdev_nvme_set_options", 00:12:44.713 "params": { 00:12:44.713 "action_on_timeout": "none", 00:12:44.713 "timeout_us": 0, 00:12:44.713 "timeout_admin_us": 0, 00:12:44.713 "keep_alive_timeout_ms": 10000, 00:12:44.713 "arbitration_burst": 0, 00:12:44.713 "low_priority_weight": 0, 00:12:44.713 "medium_priority_weight": 0, 00:12:44.713 "high_priority_weight": 0, 00:12:44.713 "nvme_adminq_poll_period_us": 10000, 00:12:44.713 "nvme_ioq_poll_period_us": 0, 00:12:44.713 "io_queue_requests": 512, 00:12:44.713 "delay_cmd_submit": true, 00:12:44.713 "transport_retry_count": 4, 00:12:44.713 "bdev_retry_count": 3, 00:12:44.713 "transport_ack_timeout": 0, 00:12:44.713 "ctrlr_loss_timeout_sec": 0, 00:12:44.713 "reconnect_delay_sec": 0, 00:12:44.713 "fast_io_fail_timeout_sec": 0, 00:12:44.713 "disable_auto_failback": false, 00:12:44.713 "generate_uuids": false, 00:12:44.713 "transport_tos": 0, 00:12:44.713 "nvme_error_stat": false, 00:12:44.713 "rdma_srq_size": 0, 00:12:44.713 "io_path_stat": false, 00:12:44.713 "allow_accel_sequence": false, 00:12:44.713 "rdma_max_cq_size": 0, 00:12:44.713 "rdma_cm_event_timeout_ms": 0, 00:12:44.713 "dhchap_digests": [ 00:12:44.713 "sha256", 00:12:44.713 "sha384", 00:12:44.713 "sha512" 00:12:44.713 ], 00:12:44.713 "dhchap_dhgroups": [ 00:12:44.713 "null", 00:12:44.713 "ffdhe2048", 00:12:44.713 "ffdhe3072", 00:12:44.713 "ffdhe4096", 00:12:44.713 "ffdhe6144", 00:12:44.713 "ffdhe8192" 00:12:44.713 ] 00:12:44.713 } 00:12:44.713 }, 00:12:44.713 { 00:12:44.713 "method": "bdev_nvme_attach_controller", 00:12:44.713 "params": { 00:12:44.713 "name": "nvme0", 00:12:44.713 "trtype": "TCP", 00:12:44.713 "adrfam": "IPv4", 00:12:44.713 "traddr": "10.0.0.3", 00:12:44.713 "trsvcid": "4420", 00:12:44.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:44.713 "prchk_reftag": false, 00:12:44.713 
"prchk_guard": false, 00:12:44.713 "ctrlr_loss_timeout_sec": 0, 00:12:44.713 "reconnect_delay_sec": 0, 00:12:44.713 "fast_io_fail_timeout_sec": 0, 00:12:44.713 "psk": "key0", 00:12:44.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:44.713 "hdgst": false, 00:12:44.713 "ddgst": false, 00:12:44.713 "multipath": "multipath" 00:12:44.713 } 00:12:44.713 }, 00:12:44.713 { 00:12:44.713 "method": "bdev_nvme_set_hotplug", 00:12:44.713 "params": { 00:12:44.713 "period_us": 100000, 00:12:44.713 "enable": false 00:12:44.713 } 00:12:44.713 }, 00:12:44.713 { 00:12:44.713 "method": "bdev_enable_histogram", 00:12:44.713 "params": { 00:12:44.713 "name": "nvme0n1", 00:12:44.713 "enable": true 00:12:44.713 } 00:12:44.713 }, 00:12:44.713 { 00:12:44.713 "method": "bdev_wait_for_examine" 00:12:44.713 } 00:12:44.713 ] 00:12:44.713 }, 00:12:44.713 { 00:12:44.713 "subsystem": "nbd", 00:12:44.713 "config": [] 00:12:44.713 } 00:12:44.713 ] 00:12:44.713 }' 00:12:44.713 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 71392 00:12:44.713 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71392 ']' 00:12:44.713 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71392 00:12:44.713 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:44.713 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:44.713 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71392 00:12:44.713 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:44.713 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:44.713 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71392' 00:12:44.713 killing process with pid 71392 00:12:44.713 Received shutdown signal, test time was about 1.000000 seconds 00:12:44.713 00:12:44.713 Latency(us) 00:12:44.713 [2024-11-26T20:35:59.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:44.713 [2024-11-26T20:35:59.268Z] =================================================================================================================== 00:12:44.713 [2024-11-26T20:35:59.268Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:44.713 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71392 00:12:44.713 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71392 00:12:44.973 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 71360 00:12:44.973 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71360 ']' 00:12:44.973 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71360 00:12:44.973 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:44.973 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:44.973 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71360 00:12:44.973 killing process with pid 71360 00:12:44.973 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:12:44.973 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:44.973 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71360' 00:12:44.973 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71360 00:12:44.973 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71360 00:12:44.973 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:12:44.973 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:44.973 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:12:44.973 "subsystems": [ 00:12:44.973 { 00:12:44.973 "subsystem": "keyring", 00:12:44.973 "config": [ 00:12:44.973 { 00:12:44.973 "method": "keyring_file_add_key", 00:12:44.973 "params": { 00:12:44.973 "name": "key0", 00:12:44.973 "path": "/tmp/tmp.JJQPdU3TEO" 00:12:44.973 } 00:12:44.973 } 00:12:44.973 ] 00:12:44.973 }, 00:12:44.973 { 00:12:44.973 "subsystem": "iobuf", 00:12:44.973 "config": [ 00:12:44.973 { 00:12:44.973 "method": "iobuf_set_options", 00:12:44.973 "params": { 00:12:44.973 "small_pool_count": 8192, 00:12:44.973 "large_pool_count": 1024, 00:12:44.973 "small_bufsize": 8192, 00:12:44.973 "large_bufsize": 135168, 00:12:44.973 "enable_numa": false 00:12:44.973 } 00:12:44.973 } 00:12:44.973 ] 00:12:44.973 }, 00:12:44.973 { 00:12:44.973 "subsystem": "sock", 00:12:44.973 "config": [ 00:12:44.973 { 00:12:44.973 "method": "sock_set_default_impl", 00:12:44.973 "params": { 00:12:44.973 "impl_name": "uring" 00:12:44.973 } 00:12:44.973 }, 00:12:44.973 { 00:12:44.973 "method": "sock_impl_set_options", 00:12:44.973 "params": { 00:12:44.973 "impl_name": "ssl", 00:12:44.973 "recv_buf_size": 4096, 00:12:44.973 "send_buf_size": 4096, 00:12:44.973 "enable_recv_pipe": true, 00:12:44.973 "enable_quickack": false, 00:12:44.973 "enable_placement_id": 0, 00:12:44.973 "enable_zerocopy_send_server": true, 00:12:44.973 "enable_zerocopy_send_client": false, 00:12:44.973 "zerocopy_threshold": 0, 00:12:44.973 "tls_version": 0, 00:12:44.973 "enable_ktls": false 00:12:44.973 } 00:12:44.973 }, 00:12:44.973 { 00:12:44.973 "method": "sock_impl_set_options", 00:12:44.973 "params": { 00:12:44.973 "impl_name": "posix", 00:12:44.973 "recv_buf_size": 2097152, 00:12:44.973 "send_buf_size": 2097152, 00:12:44.973 "enable_recv_pipe": true, 00:12:44.973 "enable_quickack": false, 00:12:44.973 "enable_placement_id": 0, 00:12:44.973 "enable_zerocopy_send_server": true, 00:12:44.973 "enable_zerocopy_send_client": false, 00:12:44.973 "zerocopy_threshold": 0, 00:12:44.973 "tls_version": 0, 00:12:44.973 "enable_ktls": false 00:12:44.973 } 00:12:44.973 }, 00:12:44.973 { 00:12:44.973 "method": "sock_impl_set_options", 00:12:44.973 "params": { 00:12:44.973 "impl_name": "uring", 00:12:44.973 "recv_buf_size": 2097152, 00:12:44.974 "send_buf_size": 2097152, 00:12:44.974 "enable_recv_pipe": true, 00:12:44.974 "enable_quickack": false, 00:12:44.974 "enable_placement_id": 0, 00:12:44.974 "enable_zerocopy_send_server": false, 00:12:44.974 "enable_zerocopy_send_client": false, 00:12:44.974 "zerocopy_threshold": 0, 00:12:44.974 "tls_version": 0, 00:12:44.974 "enable_ktls": false 00:12:44.974 } 00:12:44.974 } 00:12:44.974 ] 00:12:44.974 }, 00:12:44.974 { 00:12:44.974 "subsystem": "vmd", 00:12:44.974 "config": [] 00:12:44.974 }, 00:12:44.974 { 00:12:44.974 
"subsystem": "accel", 00:12:44.974 "config": [ 00:12:44.974 { 00:12:44.974 "method": "accel_set_options", 00:12:44.974 "params": { 00:12:44.974 "small_cache_size": 128, 00:12:44.974 "large_cache_size": 16, 00:12:44.974 "task_count": 2048, 00:12:44.974 "sequence_count": 2048, 00:12:44.974 "buf_count": 2048 00:12:44.974 } 00:12:44.974 } 00:12:44.974 ] 00:12:44.974 }, 00:12:44.974 { 00:12:44.974 "subsystem": "bdev", 00:12:44.974 "config": [ 00:12:44.974 { 00:12:44.974 "method": "bdev_set_options", 00:12:44.974 "params": { 00:12:44.974 "bdev_io_pool_size": 65535, 00:12:44.974 "bdev_io_cache_size": 256, 00:12:44.974 "bdev_auto_examine": true, 00:12:44.974 "iobuf_small_cache_size": 128, 00:12:44.974 "iobuf_large_cache_size": 16 00:12:44.974 } 00:12:44.974 }, 00:12:44.974 { 00:12:44.974 "method": "bdev_raid_set_options", 00:12:44.974 "params": { 00:12:44.974 "process_window_size_kb": 1024, 00:12:44.974 "process_max_bandwidth_mb_sec": 0 00:12:44.974 } 00:12:44.974 }, 00:12:44.974 { 00:12:44.974 "method": "bdev_iscsi_set_options", 00:12:44.974 "params": { 00:12:44.974 "timeout_sec": 30 00:12:44.974 } 00:12:44.974 }, 00:12:44.974 { 00:12:44.974 "method": "bdev_nvme_set_options", 00:12:44.974 "params": { 00:12:44.974 "action_on_timeout": "none", 00:12:44.974 "timeout_us": 0, 00:12:44.974 "timeout_admin_us": 0, 00:12:44.974 "keep_alive_timeout_ms": 10000, 00:12:44.974 "arbitration_burst": 0, 00:12:44.974 "low_priority_weight": 0, 00:12:44.974 "medium_priority_weight": 0, 00:12:44.974 "high_priority_weight": 0, 00:12:44.974 "nvme_adminq_poll_period_us": 10000, 00:12:44.974 "nvme_ioq_poll_period_us": 0, 00:12:44.974 "io_queue_requests": 0, 00:12:44.974 "delay_cmd_submit": true, 00:12:44.974 "transport_retry_count": 4, 00:12:44.974 "bdev_retry_count": 3, 00:12:44.974 "transport_ack_timeout": 0, 00:12:44.974 "ctrlr_loss_timeout_sec": 0, 00:12:44.974 "reconnect_delay_sec": 0, 00:12:44.974 "fast_io_fail_timeout_sec": 0, 00:12:44.974 "disable_auto_failback": false, 00:12:44.974 "generate_uuids": false, 00:12:44.974 "transport_tos": 0, 00:12:44.974 "nvme_error_stat": false, 00:12:44.974 "rdma_srq_size": 0, 00:12:44.974 "io_path_stat": false, 00:12:44.974 "allow_accel_sequence": false, 00:12:44.974 "rdma_max_cq_size": 0, 00:12:44.974 "rdma_cm_event_timeout_ms": 0, 00:12:44.974 "dhchap_digests": [ 00:12:44.974 "sha256", 00:12:44.974 "sha384", 00:12:44.974 "sha512" 00:12:44.974 ], 00:12:44.974 "dhchap_dhgroups": [ 00:12:44.974 "null", 00:12:44.974 "ffdhe2048", 00:12:44.974 "ffdhe3072", 00:12:44.974 "ffdhe4096", 00:12:44.974 "ffdhe6144", 00:12:44.974 "ffdhe8192" 00:12:44.974 ] 00:12:44.974 } 00:12:44.974 }, 00:12:44.974 { 00:12:44.974 "method": "bdev_nvme_set_hotplug", 00:12:44.974 "params": { 00:12:44.974 "period_us": 100000, 00:12:44.974 "enable": false 00:12:44.974 } 00:12:44.974 }, 00:12:44.974 { 00:12:44.974 "method": "bdev_malloc_create", 00:12:44.974 "params": { 00:12:44.974 "name": "malloc0", 00:12:44.974 "num_blocks": 8192, 00:12:44.974 "block_size": 4096, 00:12:44.974 "physical_block_size": 4096, 00:12:44.974 "uuid": "7391bbf1-6211-46c5-8113-72ccfa3332ca", 00:12:44.974 "optimal_io_boundary": 0, 00:12:44.974 "md_size": 0, 00:12:44.974 "dif_type": 0, 00:12:44.974 "dif_is_head_of_md": false, 00:12:44.974 "dif_pi_format": 0 00:12:44.974 } 00:12:44.974 }, 00:12:44.974 { 00:12:44.974 "method": "bdev_wait_for_examine" 00:12:44.974 } 00:12:44.974 ] 00:12:44.974 }, 00:12:44.974 { 00:12:44.974 "subsystem": "nbd", 00:12:44.974 "config": [] 00:12:44.974 }, 00:12:44.974 { 00:12:44.974 "subsystem": "scheduler", 
00:12:44.974 "config": [ 00:12:44.974 { 00:12:44.974 "method": "framework_set_scheduler", 00:12:44.974 "params": { 00:12:44.974 "name": "static" 00:12:44.974 } 00:12:44.974 } 00:12:44.974 ] 00:12:44.974 }, 00:12:44.974 { 00:12:44.974 "subsystem": "nvmf", 00:12:44.974 "config": [ 00:12:44.974 { 00:12:44.974 "method": "nvmf_set_config", 00:12:44.974 "params": { 00:12:44.974 "discovery_filter": "match_any", 00:12:44.974 "admin_cmd_passthru": { 00:12:44.974 "identify_ctrlr": false 00:12:44.974 }, 00:12:44.974 "dhchap_digests": [ 00:12:44.974 "sha256", 00:12:44.974 "sha384", 00:12:44.974 "sha512" 00:12:44.974 ], 00:12:44.974 "dhchap_dhgroups": [ 00:12:44.974 "null", 00:12:44.974 "ffdhe2048", 00:12:44.974 "ffdhe3072", 00:12:44.974 "ffdhe4096", 00:12:44.974 "ffdhe6144", 00:12:44.974 "ffdhe8192" 00:12:44.974 ] 00:12:44.974 } 00:12:44.974 }, 00:12:44.974 { 00:12:44.974 "method": "nvmf_set_max_subsystems", 00:12:44.974 "params": { 00:12:44.974 "max_subsystems": 1024 00:12:44.974 } 00:12:44.974 }, 00:12:44.974 { 00:12:44.974 "method": "nvmf_set_crdt", 00:12:44.974 "params": { 00:12:44.974 "crdt1": 0, 00:12:44.974 "crdt2": 0, 00:12:44.974 "crdt3": 0 00:12:44.974 } 00:12:44.974 }, 00:12:44.974 { 00:12:44.974 "method": "nvmf_create_transport", 00:12:44.974 "params": { 00:12:44.974 "trtype": "TCP", 00:12:44.974 "max_queue_depth": 128, 00:12:44.974 "max_io_qpairs_per_ctrlr": 127, 00:12:44.974 "in_capsule_data_size": 4096, 00:12:44.974 "max_io_size": 131072, 00:12:44.974 "io_unit_size": 131072, 00:12:44.974 "max_aq_depth": 128, 00:12:44.974 "num_shared_buffers": 511, 00:12:44.974 "buf_cache_size": 4294967295, 00:12:44.974 "dif_insert_or_strip": false, 00:12:44.974 "zcopy": false, 00:12:44.974 "c2h_success": false, 00:12:44.974 "sock_priority": 0, 00:12:44.974 "abort_timeout_sec": 1, 00:12:44.974 "ack_timeout": 0, 00:12:44.974 "data_wr_pool_size": 0 00:12:44.974 } 00:12:44.974 }, 00:12:44.974 { 00:12:44.974 "method": "nvmf_create_subsystem", 00:12:44.974 "params": { 00:12:44.974 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:44.974 "allow_any_host": false, 00:12:44.974 "serial_number": "00000000000000000000", 00:12:44.974 "model_number": "SPDK bdev Controller", 00:12:44.974 "max_namespaces": 32, 00:12:44.974 "min_cntlid": 1, 00:12:44.974 "max_cntlid": 65519, 00:12:44.974 "ana_reporting": false 00:12:44.974 } 00:12:44.974 }, 00:12:44.974 { 00:12:44.974 "method": "nvmf_subsystem_add_host", 00:12:44.974 "params": { 00:12:44.974 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:44.974 "host": "nqn.2016-06.io.spdk:host1", 00:12:44.974 "psk": "key0" 00:12:44.974 } 00:12:44.974 }, 00:12:44.974 { 00:12:44.974 "method": "nvmf_subsystem_add_ns", 00:12:44.974 "params": { 00:12:44.974 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:44.974 "namespace": { 00:12:44.974 "nsid": 1, 00:12:44.974 "bdev_name": "malloc0", 00:12:44.974 "nguid": "7391BBF1621146C5811372CCFA3332CA", 00:12:44.974 "uuid": "7391bbf1-6211-46c5-8113-72ccfa3332ca", 00:12:44.974 "no_auto_visible": false 00:12:44.974 } 00:12:44.974 } 00:12:44.974 }, 00:12:44.974 { 00:12:44.974 "method": "nvmf_subsystem_add_listener", 00:12:44.974 "params": { 00:12:44.974 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:44.974 "listen_address": { 00:12:44.974 "trtype": "TCP", 00:12:44.974 "adrfam": "IPv4", 00:12:44.974 "traddr": "10.0.0.3", 00:12:44.974 "trsvcid": "4420" 00:12:44.974 }, 00:12:44.974 "secure_channel": false, 00:12:44.974 "sock_impl": "ssl" 00:12:44.974 } 00:12:44.974 } 00:12:44.974 ] 00:12:44.974 } 00:12:44.974 ] 00:12:44.974 }' 00:12:44.974 20:35:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:44.974 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:44.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.974 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71447 00:12:44.974 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71447 00:12:44.974 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71447 ']' 00:12:44.974 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.974 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.974 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.974 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.974 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:12:44.974 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:45.233 [2024-11-26 20:35:59.533727] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:12:45.233 [2024-11-26 20:35:59.533782] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.233 [2024-11-26 20:35:59.662143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.233 [2024-11-26 20:35:59.693230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.233 [2024-11-26 20:35:59.693387] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:45.233 [2024-11-26 20:35:59.693829] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.233 [2024-11-26 20:35:59.693889] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.233 [2024-11-26 20:35:59.693904] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
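Here the target is relaunched directly from the configuration captured a moment ago: nvmfappstart passes -c /dev/fd/62, where fd 62 carries the echoed tgtcfg JSON, so the keyring key, TLS listener and subsystem come back at startup without any further RPC calls. A sketch of the underlying invocation, matching the command line shown in the trace (process substitution is one way the /dev/fd/62 path arises; the harness's own plumbing may differ):

  # restart nvmf_tgt inside the test netns, pre-configured from the saved JSON
  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")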
00:12:45.233 [2024-11-26 20:35:59.694161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.490 [2024-11-26 20:35:59.837165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:45.490 [2024-11-26 20:35:59.901688] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:45.490 [2024-11-26 20:35:59.933641] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:45.490 [2024-11-26 20:35:59.933793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:46.055 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.055 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:46.055 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:46.055 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:46.055 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:46.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:46.055 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.055 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=71479 00:12:46.055 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 71479 /var/tmp/bdevperf.sock 00:12:46.055 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71479 ']' 00:12:46.055 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:46.055 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:46.055 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
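With the target listening again, the second half of the test repeats the run against a bdevperf instance that is likewise pre-configured: the bperfcfg JSON captured above is fed in as -c /dev/fd/63 on the command line that follows, so the keyring entry and the TLS-enabled nvme0 controller are created at startup rather than via RPC. A sketch of that launch, mirroring the bdevperf arguments shown in the trace (again assuming process substitution supplies the fd):

  # start bdevperf pre-loaded with the captured initiator config
  build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")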
00:12:46.055 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:12:46.055 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:46.055 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:46.055 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:12:46.056 "subsystems": [ 00:12:46.056 { 00:12:46.056 "subsystem": "keyring", 00:12:46.056 "config": [ 00:12:46.056 { 00:12:46.056 "method": "keyring_file_add_key", 00:12:46.056 "params": { 00:12:46.056 "name": "key0", 00:12:46.056 "path": "/tmp/tmp.JJQPdU3TEO" 00:12:46.056 } 00:12:46.056 } 00:12:46.056 ] 00:12:46.056 }, 00:12:46.056 { 00:12:46.056 "subsystem": "iobuf", 00:12:46.056 "config": [ 00:12:46.056 { 00:12:46.056 "method": "iobuf_set_options", 00:12:46.056 "params": { 00:12:46.056 "small_pool_count": 8192, 00:12:46.056 "large_pool_count": 1024, 00:12:46.056 "small_bufsize": 8192, 00:12:46.056 "large_bufsize": 135168, 00:12:46.056 "enable_numa": false 00:12:46.056 } 00:12:46.056 } 00:12:46.056 ] 00:12:46.056 }, 00:12:46.056 { 00:12:46.056 "subsystem": "sock", 00:12:46.056 "config": [ 00:12:46.056 { 00:12:46.056 "method": "sock_set_default_impl", 00:12:46.056 "params": { 00:12:46.056 "impl_name": "uring" 00:12:46.056 } 00:12:46.056 }, 00:12:46.056 { 00:12:46.056 "method": "sock_impl_set_options", 00:12:46.056 "params": { 00:12:46.056 "impl_name": "ssl", 00:12:46.056 "recv_buf_size": 4096, 00:12:46.056 "send_buf_size": 4096, 00:12:46.056 "enable_recv_pipe": true, 00:12:46.056 "enable_quickack": false, 00:12:46.056 "enable_placement_id": 0, 00:12:46.056 "enable_zerocopy_send_server": true, 00:12:46.056 "enable_zerocopy_send_client": false, 00:12:46.056 "zerocopy_threshold": 0, 00:12:46.056 "tls_version": 0, 00:12:46.056 "enable_ktls": false 00:12:46.056 } 00:12:46.056 }, 00:12:46.056 { 00:12:46.056 "method": "sock_impl_set_options", 00:12:46.056 "params": { 00:12:46.056 "impl_name": "posix", 00:12:46.056 "recv_buf_size": 2097152, 00:12:46.056 "send_buf_size": 2097152, 00:12:46.056 "enable_recv_pipe": true, 00:12:46.056 "enable_quickack": false, 00:12:46.056 "enable_placement_id": 0, 00:12:46.056 "enable_zerocopy_send_server": true, 00:12:46.056 "enable_zerocopy_send_client": false, 00:12:46.056 "zerocopy_threshold": 0, 00:12:46.056 "tls_version": 0, 00:12:46.056 "enable_ktls": false 00:12:46.056 } 00:12:46.056 }, 00:12:46.056 { 00:12:46.056 "method": "sock_impl_set_options", 00:12:46.056 "params": { 00:12:46.056 "impl_name": "uring", 00:12:46.056 "recv_buf_size": 2097152, 00:12:46.056 "send_buf_size": 2097152, 00:12:46.056 "enable_recv_pipe": true, 00:12:46.056 "enable_quickack": false, 00:12:46.056 "enable_placement_id": 0, 00:12:46.056 "enable_zerocopy_send_server": false, 00:12:46.056 "enable_zerocopy_send_client": false, 00:12:46.056 "zerocopy_threshold": 0, 00:12:46.056 "tls_version": 0, 00:12:46.056 "enable_ktls": false 00:12:46.056 } 00:12:46.056 } 00:12:46.056 ] 00:12:46.056 }, 00:12:46.056 { 00:12:46.056 "subsystem": "vmd", 00:12:46.056 "config": [] 00:12:46.056 }, 00:12:46.056 { 00:12:46.056 "subsystem": "accel", 00:12:46.056 "config": [ 00:12:46.056 { 00:12:46.056 "method": "accel_set_options", 00:12:46.056 "params": { 00:12:46.056 "small_cache_size": 128, 00:12:46.056 "large_cache_size": 16, 00:12:46.056 "task_count": 2048, 00:12:46.056 "sequence_count": 2048, 
00:12:46.056 "buf_count": 2048 00:12:46.056 } 00:12:46.056 } 00:12:46.056 ] 00:12:46.056 }, 00:12:46.056 { 00:12:46.056 "subsystem": "bdev", 00:12:46.056 "config": [ 00:12:46.056 { 00:12:46.056 "method": "bdev_set_options", 00:12:46.056 "params": { 00:12:46.056 "bdev_io_pool_size": 65535, 00:12:46.056 "bdev_io_cache_size": 256, 00:12:46.056 "bdev_auto_examine": true, 00:12:46.056 "iobuf_small_cache_size": 128, 00:12:46.056 "iobuf_large_cache_size": 16 00:12:46.056 } 00:12:46.056 }, 00:12:46.056 { 00:12:46.056 "method": "bdev_raid_set_options", 00:12:46.056 "params": { 00:12:46.056 "process_window_size_kb": 1024, 00:12:46.056 "process_max_bandwidth_mb_sec": 0 00:12:46.056 } 00:12:46.056 }, 00:12:46.056 { 00:12:46.056 "method": "bdev_iscsi_set_options", 00:12:46.056 "params": { 00:12:46.056 "timeout_sec": 30 00:12:46.056 } 00:12:46.056 }, 00:12:46.056 { 00:12:46.056 "method": "bdev_nvme_set_options", 00:12:46.056 "params": { 00:12:46.056 "action_on_timeout": "none", 00:12:46.056 "timeout_us": 0, 00:12:46.056 "timeout_admin_us": 0, 00:12:46.056 "keep_alive_timeout_ms": 10000, 00:12:46.056 "arbitration_burst": 0, 00:12:46.056 "low_priority_weight": 0, 00:12:46.056 "medium_priority_weight": 0, 00:12:46.056 "high_priority_weight": 0, 00:12:46.056 "nvme_adminq_poll_period_us": 10000, 00:12:46.056 "nvme_ioq_poll_period_us": 0, 00:12:46.056 "io_queue_requests": 512, 00:12:46.056 "delay_cmd_submit": true, 00:12:46.056 "transport_retry_count": 4, 00:12:46.056 "bdev_retry_count": 3, 00:12:46.056 "transport_ack_timeout": 0, 00:12:46.056 "ctrlr_loss_timeout_sec": 0, 00:12:46.056 "reconnect_delay_sec": 0, 00:12:46.056 "fast_io_fail_timeout_sec": 0, 00:12:46.056 "disable_auto_failback": false, 00:12:46.056 "generate_uuids": false, 00:12:46.056 "transport_tos": 0, 00:12:46.056 "nvme_error_stat": false, 00:12:46.056 "rdma_srq_size": 0, 00:12:46.056 "io_path_stat": false, 00:12:46.056 "allow_accel_sequence": false, 00:12:46.056 "rdma_max_cq_size": 0, 00:12:46.056 "rdma_cm_event_timeout_ms": 0, 00:12:46.056 "dhchap_digests": [ 00:12:46.056 "sha256", 00:12:46.056 "sha384", 00:12:46.056 "sha512" 00:12:46.056 ], 00:12:46.056 "dhchap_dhgroups": [ 00:12:46.056 "null", 00:12:46.056 "ffdhe2048", 00:12:46.056 "ffdhe3072", 00:12:46.056 "ffdhe4096", 00:12:46.056 "ffdhe6144", 00:12:46.056 "ffdhe8192" 00:12:46.056 ] 00:12:46.056 } 00:12:46.056 }, 00:12:46.056 { 00:12:46.056 "method": "bdev_nvme_attach_controller", 00:12:46.056 "params": { 00:12:46.056 "name": "nvme0", 00:12:46.056 "trtype": "TCP", 00:12:46.056 "adrfam": "IPv4", 00:12:46.056 "traddr": "10.0.0.3", 00:12:46.056 "trsvcid": "4420", 00:12:46.056 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:46.056 "prchk_reftag": false, 00:12:46.056 "prchk_guard": false, 00:12:46.056 "ctrlr_loss_timeout_sec": 0, 00:12:46.056 "reconnect_delay_sec": 0, 00:12:46.056 "fast_io_fail_timeout_sec": 0, 00:12:46.056 "psk": "key0", 00:12:46.056 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:46.056 "hdgst": false, 00:12:46.056 "ddgst": false, 00:12:46.056 "multipath": "multipath" 00:12:46.056 } 00:12:46.056 }, 00:12:46.056 { 00:12:46.056 "method": "bdev_nvme_set_hotplug", 00:12:46.056 "params": { 00:12:46.056 "period_us": 100000, 00:12:46.056 "enable": false 00:12:46.056 } 00:12:46.056 }, 00:12:46.056 { 00:12:46.056 "method": "bdev_enable_histogram", 00:12:46.056 "params": { 00:12:46.056 "name": "nvme0n1", 00:12:46.056 "enable": true 00:12:46.056 } 00:12:46.056 }, 00:12:46.056 { 00:12:46.056 "method": "bdev_wait_for_examine" 00:12:46.056 } 00:12:46.056 ] 00:12:46.056 }, 00:12:46.056 { 
00:12:46.056 "subsystem": "nbd", 00:12:46.056 "config": [] 00:12:46.056 } 00:12:46.056 ] 00:12:46.056 }' 00:12:46.056 [2024-11-26 20:36:00.531186] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:12:46.056 [2024-11-26 20:36:00.531252] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71479 ] 00:12:46.314 [2024-11-26 20:36:00.673620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.314 [2024-11-26 20:36:00.711081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.314 [2024-11-26 20:36:00.825611] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:46.314 [2024-11-26 20:36:00.864445] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:46.888 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.888 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:46.888 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:12:46.888 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:12:47.149 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.149 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:47.408 Running I/O for 1 seconds... 
00:12:48.342 6258.00 IOPS, 24.45 MiB/s 00:12:48.342 Latency(us) 00:12:48.342 [2024-11-26T20:36:02.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:48.342 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:48.342 Verification LBA range: start 0x0 length 0x2000 00:12:48.342 nvme0n1 : 1.01 6328.10 24.72 0.00 0.00 20118.07 2520.62 16232.76 00:12:48.342 [2024-11-26T20:36:02.897Z] =================================================================================================================== 00:12:48.342 [2024-11-26T20:36:02.897Z] Total : 6328.10 24.72 0.00 0.00 20118.07 2520.62 16232.76 00:12:48.342 { 00:12:48.342 "results": [ 00:12:48.342 { 00:12:48.342 "job": "nvme0n1", 00:12:48.342 "core_mask": "0x2", 00:12:48.342 "workload": "verify", 00:12:48.342 "status": "finished", 00:12:48.342 "verify_range": { 00:12:48.342 "start": 0, 00:12:48.342 "length": 8192 00:12:48.342 }, 00:12:48.342 "queue_depth": 128, 00:12:48.342 "io_size": 4096, 00:12:48.342 "runtime": 1.009307, 00:12:48.342 "iops": 6328.10433297302, 00:12:48.342 "mibps": 24.71915755067586, 00:12:48.342 "io_failed": 0, 00:12:48.342 "io_timeout": 0, 00:12:48.342 "avg_latency_us": 20118.074438703614, 00:12:48.342 "min_latency_us": 2520.6153846153848, 00:12:48.342 "max_latency_us": 16232.763076923076 00:12:48.342 } 00:12:48.342 ], 00:12:48.342 "core_count": 1 00:12:48.342 } 00:12:48.342 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:12:48.342 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:12:48.342 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:12:48.342 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:12:48.342 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:12:48.342 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:12:48.342 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:48.342 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:12:48.342 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:12:48.342 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:12:48.342 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:48.342 nvmf_trace.0 00:12:48.342 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:12:48.342 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 71479 00:12:48.342 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71479 ']' 00:12:48.342 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71479 00:12:48.342 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:48.342 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:48.342 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71479 00:12:48.342 killing process 
with pid 71479 00:12:48.342 Received shutdown signal, test time was about 1.000000 seconds 00:12:48.342 00:12:48.342 Latency(us) 00:12:48.342 [2024-11-26T20:36:02.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:48.342 [2024-11-26T20:36:02.897Z] =================================================================================================================== 00:12:48.342 [2024-11-26T20:36:02.897Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:48.342 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:48.342 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:48.342 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71479' 00:12:48.342 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71479 00:12:48.343 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71479 00:12:48.600 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:12:48.600 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:48.600 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:12:48.600 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:48.600 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:12:48.600 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:48.600 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:48.600 rmmod nvme_tcp 00:12:48.601 rmmod nvme_fabrics 00:12:48.601 rmmod nvme_keyring 00:12:48.601 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:48.601 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:12:48.601 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:12:48.601 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 71447 ']' 00:12:48.601 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 71447 00:12:48.601 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71447 ']' 00:12:48.601 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71447 00:12:48.601 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:48.601 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:48.601 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71447 00:12:48.601 killing process with pid 71447 00:12:48.601 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:48.601 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:48.601 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71447' 00:12:48.601 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71447 00:12:48.601 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71447 
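Cleanup above proceeds in the usual order for these tests: the shared-memory trace left by the target is archived for offline analysis, both application PIDs (71479 and 71447) are killed and waited on, and nvmftestfini unloads the kernel NVMe/TCP modules. The trace archive step, as it appears in the trace (the destination is the repo's ../output directory used throughout this run):

  # archive the target's shared-memory trace file for offline analysis
  tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0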
00:12:48.859 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:48.859 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:48.859 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:48.859 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:12:48.859 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:12:48.859 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:48.859 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:12:48.859 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:48.859 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:48.859 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:48.859 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:48.859 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:48.859 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:48.859 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:48.859 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:48.859 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:48.859 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:48.859 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:48.859 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:48.859 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:48.859 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:48.859 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:49.118 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:49.118 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.118 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.118 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.118 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:12:49.118 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.4bX1hglka6 /tmp/tmp.jdIyaxSojI /tmp/tmp.JJQPdU3TEO 00:12:49.118 00:12:49.118 real 1m21.788s 00:12:49.118 user 2m15.347s 00:12:49.118 sys 0m22.152s 00:12:49.118 ************************************ 00:12:49.118 END TEST nvmf_tls 00:12:49.118 ************************************ 00:12:49.118 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:49.118 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:49.118 20:36:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:12:49.118 20:36:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:49.118 20:36:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.118 20:36:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:49.118 ************************************ 00:12:49.118 START TEST nvmf_fips 00:12:49.118 ************************************ 00:12:49.118 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:12:49.118 * Looking for test storage... 00:12:49.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:12:49.118 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:49.118 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:12:49.118 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:49.118 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:49.118 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:49.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.119 --rc genhtml_branch_coverage=1 00:12:49.119 --rc genhtml_function_coverage=1 00:12:49.119 --rc genhtml_legend=1 00:12:49.119 --rc geninfo_all_blocks=1 00:12:49.119 --rc geninfo_unexecuted_blocks=1 00:12:49.119 00:12:49.119 ' 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:49.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.119 --rc genhtml_branch_coverage=1 00:12:49.119 --rc genhtml_function_coverage=1 00:12:49.119 --rc genhtml_legend=1 00:12:49.119 --rc geninfo_all_blocks=1 00:12:49.119 --rc geninfo_unexecuted_blocks=1 00:12:49.119 00:12:49.119 ' 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:49.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.119 --rc genhtml_branch_coverage=1 00:12:49.119 --rc genhtml_function_coverage=1 00:12:49.119 --rc genhtml_legend=1 00:12:49.119 --rc geninfo_all_blocks=1 00:12:49.119 --rc geninfo_unexecuted_blocks=1 00:12:49.119 00:12:49.119 ' 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:49.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.119 --rc genhtml_branch_coverage=1 00:12:49.119 --rc genhtml_function_coverage=1 00:12:49.119 --rc genhtml_legend=1 00:12:49.119 --rc geninfo_all_blocks=1 00:12:49.119 --rc geninfo_unexecuted_blocks=1 00:12:49.119 00:12:49.119 ' 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
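The lcov check above (lt 1.15 2) and the OpenSSL check further down (ge 3.1.1 3.0.0) both go through the same cmp_versions helper in scripts/common.sh: split each version string on dots/dashes, normalize each field to a decimal, and compare component by component. A minimal standalone sketch of that logic follows; the function name and the zero-fill of missing or non-numeric fields are illustrative, not the verbatim SPDK implementation.

  # version_ge A B -> exit 0 (true) if A >= B, comparing dot/dash-separated fields
  version_ge() {
      local -a v1 v2
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          local a=${v1[i]:-0} b=${v2[i]:-0}
          # non-numeric fields fall back to 0, mirroring the "decimal" normalization step
          [[ $a =~ ^[0-9]+$ ]] || a=0
          [[ $b =~ ^[0-9]+$ ]] || b=0
          (( a > b )) && return 0
          (( a < b )) && return 1
      done
      return 0   # all fields equal -> A >= B
  }
  # e.g.: version_ge "$(openssl version | awk '{print $2}')" 3.0.0 && echo "OpenSSL is 3.0.0+"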
00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:49.119 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:12:49.119 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:12:49.425 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:12:49.426 Error setting digest 00:12:49.426 40E2F860FF7E0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:12:49.426 40E2F860FF7E0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:49.426 
20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:49.426 Cannot find device "nvmf_init_br" 00:12:49.426 20:36:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:49.426 Cannot find device "nvmf_init_br2" 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:49.426 Cannot find device "nvmf_tgt_br" 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:49.426 Cannot find device "nvmf_tgt_br2" 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:49.426 Cannot find device "nvmf_init_br" 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:49.426 Cannot find device "nvmf_init_br2" 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:49.426 Cannot find device "nvmf_tgt_br" 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:49.426 Cannot find device "nvmf_tgt_br2" 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:49.426 Cannot find device "nvmf_br" 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:49.426 Cannot find device "nvmf_init_if" 00:12:49.426 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:12:49.427 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:49.427 Cannot find device "nvmf_init_if2" 00:12:49.427 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:12:49.427 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:49.427 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:49.427 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:12:49.427 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:49.427 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:49.427 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:12:49.427 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:49.427 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:49.427 20:36:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:49.427 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:49.427 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:49.427 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:49.427 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:49.427 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:49.427 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:49.686 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:49.686 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:49.686 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:49.686 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:49.686 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:49.686 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:49.686 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:49.686 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:49.686 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:49.686 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:49.686 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:49.686 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:49.686 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:49.686 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:49.686 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:49.686 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:12:49.686 00:12:49.686 --- 10.0.0.3 ping statistics --- 00:12:49.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.686 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:49.686 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:49.686 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:12:49.686 00:12:49.686 --- 10.0.0.4 ping statistics --- 00:12:49.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.686 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:49.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:49.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:12:49.686 00:12:49.686 --- 10.0.0.1 ping statistics --- 00:12:49.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.686 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:49.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:49.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:12:49.686 00:12:49.686 --- 10.0.0.2 ping statistics --- 00:12:49.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.686 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=71796 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 71796 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 71796 ']' 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:49.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.686 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.687 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:49.687 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:12:49.687 [2024-11-26 20:36:04.131468] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:12:49.687 [2024-11-26 20:36:04.131710] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.945 [2024-11-26 20:36:04.274716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.945 [2024-11-26 20:36:04.311027] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.945 [2024-11-26 20:36:04.311203] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:49.945 [2024-11-26 20:36:04.311214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:49.945 [2024-11-26 20:36:04.311219] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:49.945 [2024-11-26 20:36:04.311223] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:49.945 [2024-11-26 20:36:04.311490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.945 [2024-11-26 20:36:04.344208] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:50.510 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:50.510 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:12:50.510 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:50.510 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:50.510 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:12:50.510 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.510 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:12:50.510 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:12:50.510 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:12:50.510 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Gwi 00:12:50.510 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:12:50.510 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Gwi 00:12:50.510 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Gwi 00:12:50.510 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Gwi 00:12:50.510 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:50.769 [2024-11-26 20:36:05.203057] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:50.769 [2024-11-26 20:36:05.219005] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:50.769 [2024-11-26 20:36:05.219196] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:50.769 malloc0 00:12:50.769 20:36:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:50.769 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=71832 00:12:50.769 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:50.769 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 71832 /var/tmp/bdevperf.sock 00:12:50.769 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 71832 ']' 00:12:50.769 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:50.769 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.769 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:50.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:50.769 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.769 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:12:51.028 [2024-11-26 20:36:05.330345] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:12:51.028 [2024-11-26 20:36:05.330552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71832 ] 00:12:51.028 [2024-11-26 20:36:05.467969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.028 [2024-11-26 20:36:05.512292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.028 [2024-11-26 20:36:05.546962] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:51.966 20:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.966 20:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:12:51.966 20:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Gwi 00:12:51.966 20:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:52.225 [2024-11-26 20:36:06.597361] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:52.225 TLSTESTn1 00:12:52.225 20:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:52.225 Running I/O for 10 seconds... 
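The run above boils down to a short initiator-side RPC sequence: write the TLS PSK to a 0600 file, start bdevperf in RPC-wait mode (-z), register the key with the keyring, attach a TLS-enabled NVMe/TCP controller, then kick off the timed verify workload. A condensed replay of those commands from the trace follows; it assumes the target side (nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.3:4420 with the same PSK) has already been configured, as fips.sh did earlier via rpc.py, and the paths are the ones used in this CI workspace.

  # store the interchange-format PSK with restrictive permissions
  key_path=$(mktemp -t spdk-psk.XXX)
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
  chmod 0600 "$key_path"

  # bdevperf waits on its RPC socket (-z) until a controller is attached
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
      -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

  # register the PSK and attach the TLS-enabled NVMe/TCP controller
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      keyring_file_add_key key0 "$key_path"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

  # drive the 10-second 4 KiB verify workload over the TLS connection
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests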
00:12:54.244 5885.00 IOPS, 22.99 MiB/s [2024-11-26T20:36:10.171Z] 6198.00 IOPS, 24.21 MiB/s [2024-11-26T20:36:11.105Z] 6425.33 IOPS, 25.10 MiB/s [2024-11-26T20:36:12.038Z] 6415.75 IOPS, 25.06 MiB/s [2024-11-26T20:36:12.971Z] 6347.00 IOPS, 24.79 MiB/s [2024-11-26T20:36:13.903Z] 6295.83 IOPS, 24.59 MiB/s [2024-11-26T20:36:14.837Z] 6259.14 IOPS, 24.45 MiB/s [2024-11-26T20:36:15.776Z] 6300.00 IOPS, 24.61 MiB/s [2024-11-26T20:36:17.151Z] 6361.22 IOPS, 24.85 MiB/s [2024-11-26T20:36:17.151Z] 6412.80 IOPS, 25.05 MiB/s 00:13:02.596 Latency(us) 00:13:02.596 [2024-11-26T20:36:17.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.596 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:02.596 Verification LBA range: start 0x0 length 0x2000 00:13:02.596 TLSTESTn1 : 10.02 6412.39 25.05 0.00 0.00 19922.70 6049.48 20568.22 00:13:02.596 [2024-11-26T20:36:17.151Z] =================================================================================================================== 00:13:02.596 [2024-11-26T20:36:17.151Z] Total : 6412.39 25.05 0.00 0.00 19922.70 6049.48 20568.22 00:13:02.596 { 00:13:02.596 "results": [ 00:13:02.596 { 00:13:02.596 "job": "TLSTESTn1", 00:13:02.596 "core_mask": "0x4", 00:13:02.596 "workload": "verify", 00:13:02.596 "status": "finished", 00:13:02.596 "verify_range": { 00:13:02.596 "start": 0, 00:13:02.596 "length": 8192 00:13:02.596 }, 00:13:02.596 "queue_depth": 128, 00:13:02.596 "io_size": 4096, 00:13:02.596 "runtime": 10.020444, 00:13:02.596 "iops": 6412.3905088437195, 00:13:02.596 "mibps": 25.04840042517078, 00:13:02.596 "io_failed": 0, 00:13:02.596 "io_timeout": 0, 00:13:02.596 "avg_latency_us": 19922.698234606105, 00:13:02.596 "min_latency_us": 6049.476923076923, 00:13:02.596 "max_latency_us": 20568.221538461537 00:13:02.596 } 00:13:02.596 ], 00:13:02.596 "core_count": 1 00:13:02.596 } 00:13:02.596 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:13:02.596 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:13:02.596 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:13:02.596 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:13:02.596 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:13:02.596 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:02.596 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:13:02.596 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:13:02.596 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:13:02.596 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:02.596 nvmf_trace.0 00:13:02.596 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:13:02.596 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 71832 00:13:02.596 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 71832 ']' 00:13:02.596 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
71832 00:13:02.596 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:13:02.596 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:02.596 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71832 00:13:02.596 killing process with pid 71832 00:13:02.596 Received shutdown signal, test time was about 10.000000 seconds 00:13:02.596 00:13:02.596 Latency(us) 00:13:02.596 [2024-11-26T20:36:17.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.596 [2024-11-26T20:36:17.151Z] =================================================================================================================== 00:13:02.596 [2024-11-26T20:36:17.151Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:02.596 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:02.596 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:02.596 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71832' 00:13:02.596 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 71832 00:13:02.596 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 71832 00:13:02.596 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:13:02.596 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:02.596 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:13:02.596 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:02.596 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:13:02.596 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:02.596 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:02.596 rmmod nvme_tcp 00:13:02.596 rmmod nvme_fabrics 00:13:02.596 rmmod nvme_keyring 00:13:02.596 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:02.596 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:13:02.596 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:13:02.596 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 71796 ']' 00:13:02.596 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 71796 00:13:02.596 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 71796 ']' 00:13:02.596 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 71796 00:13:02.596 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:13:02.596 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:02.596 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71796 00:13:02.855 killing process with pid 71796 00:13:02.855 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:02.855 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:02.855 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71796' 00:13:02.855 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 71796 00:13:02.855 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 71796 00:13:02.855 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:02.855 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:02.855 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:02.855 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:13:02.855 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:13:02.855 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:13:02.855 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:02.855 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:02.855 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:02.855 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:02.855 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:02.855 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:02.855 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:02.855 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:02.855 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:02.855 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:02.855 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:02.855 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:03.112 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:03.112 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:03.112 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:03.112 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:03.112 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:03.112 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.112 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:03.112 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.112 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:13:03.112 20:36:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Gwi 00:13:03.112 ************************************ 00:13:03.112 END TEST nvmf_fips 00:13:03.113 ************************************ 00:13:03.113 00:13:03.113 real 0m14.043s 00:13:03.113 user 0m20.374s 00:13:03.113 sys 0m4.672s 00:13:03.113 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:03.113 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:03.113 20:36:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:13:03.113 20:36:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:03.113 20:36:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:03.113 20:36:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:03.113 ************************************ 00:13:03.113 START TEST nvmf_control_msg_list 00:13:03.113 ************************************ 00:13:03.113 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:13:03.371 * Looking for test storage... 00:13:03.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:03.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.371 --rc genhtml_branch_coverage=1 00:13:03.371 --rc genhtml_function_coverage=1 00:13:03.371 --rc genhtml_legend=1 00:13:03.371 --rc geninfo_all_blocks=1 00:13:03.371 --rc geninfo_unexecuted_blocks=1 00:13:03.371 00:13:03.371 ' 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:03.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.371 --rc genhtml_branch_coverage=1 00:13:03.371 --rc genhtml_function_coverage=1 00:13:03.371 --rc genhtml_legend=1 00:13:03.371 --rc geninfo_all_blocks=1 00:13:03.371 --rc geninfo_unexecuted_blocks=1 00:13:03.371 00:13:03.371 ' 00:13:03.371 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:03.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.372 --rc genhtml_branch_coverage=1 00:13:03.372 --rc genhtml_function_coverage=1 00:13:03.372 --rc genhtml_legend=1 00:13:03.372 --rc geninfo_all_blocks=1 00:13:03.372 --rc geninfo_unexecuted_blocks=1 00:13:03.372 00:13:03.372 ' 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:03.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.372 --rc genhtml_branch_coverage=1 00:13:03.372 --rc genhtml_function_coverage=1 00:13:03.372 --rc genhtml_legend=1 00:13:03.372 --rc geninfo_all_blocks=1 00:13:03.372 --rc 
geninfo_unexecuted_blocks=1 00:13:03.372 00:13:03.372 ' 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:03.372 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:03.372 Cannot find device "nvmf_init_br" 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:03.372 Cannot find device "nvmf_init_br2" 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:03.372 Cannot find device "nvmf_tgt_br" 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:03.372 Cannot find device "nvmf_tgt_br2" 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:03.372 Cannot find device "nvmf_init_br" 00:13:03.372 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:13:03.373 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:03.373 Cannot find device "nvmf_init_br2" 00:13:03.373 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:13:03.373 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:03.373 Cannot find device "nvmf_tgt_br" 00:13:03.373 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:13:03.373 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:03.373 Cannot find device "nvmf_tgt_br2" 00:13:03.373 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:13:03.373 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:03.373 Cannot find device "nvmf_br" 00:13:03.373 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:13:03.373 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:03.373 Cannot find 
device "nvmf_init_if" 00:13:03.373 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:13:03.373 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:03.373 Cannot find device "nvmf_init_if2" 00:13:03.373 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:13:03.373 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:03.373 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:03.373 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:13:03.373 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:03.373 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:03.373 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:13:03.373 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:03.373 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:03.373 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:03.373 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:03.373 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:03.688 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:03.688 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:03.688 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:03.688 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:03.688 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:03.688 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:03.688 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:03.688 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:03.688 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:03.688 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:03.688 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:03.688 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:03.688 20:36:17 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:03.688 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:03.688 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:03.688 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:03.688 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:03.688 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:03.688 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:03.688 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:03.688 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:03.688 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:03.688 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:03.688 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:03.688 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:03.688 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:03.688 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:03.688 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:03.688 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:03.688 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:13:03.688 00:13:03.688 --- 10.0.0.3 ping statistics --- 00:13:03.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.688 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:13:03.688 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:03.688 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:03.688 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.031 ms 00:13:03.688 00:13:03.688 --- 10.0.0.4 ping statistics --- 00:13:03.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.688 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:13:03.688 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:03.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:03.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.013 ms 00:13:03.688 00:13:03.688 --- 10.0.0.1 ping statistics --- 00:13:03.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.688 rtt min/avg/max/mdev = 0.013/0.013/0.013/0.000 ms 00:13:03.688 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:03.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:03.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:13:03.688 00:13:03.688 --- 10.0.0.2 ping statistics --- 00:13:03.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.688 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:13:03.689 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.689 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:13:03.689 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:03.689 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.689 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:03.689 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:03.689 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.689 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:03.689 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:03.689 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:13:03.689 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:03.689 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:03.689 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:03.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.689 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=72221 00:13:03.689 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 72221 00:13:03.689 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 72221 ']' 00:13:03.689 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.689 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.689 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
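The nvmf_veth_init trace above builds the per-test network before the target is started: a nvmf_tgt_ns_spdk namespace, veth pairs nvmf_init_if/nvmf_init_br and nvmf_tgt_if/nvmf_tgt_br (plus the *2 pairs), addresses 10.0.0.1 through 10.0.0.4/24, a nvmf_br bridge joining the host-side peers, iptables ACCEPT rules for TCP port 4420, and ping checks in both directions. The earlier "Cannot find device" and "Cannot open network namespace" messages are tolerated cleanup of leftovers from a previous run; each failing delete in the trace is followed by "# true". As a hedged, minimal sketch of a single initiator/target pair, using only commands, interface names and addresses taken from the trace (the second pair and all error handling omitted), the setup amounts to:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target end will live in the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # host initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # namespaced target address
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                       # bridge the host-side veth peers together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                            # host initiator -> namespaced target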
00:13:03.689 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:03.689 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.689 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:03.689 [2024-11-26 20:36:18.130252] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:13:03.689 [2024-11-26 20:36:18.130440] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.947 [2024-11-26 20:36:18.275803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.947 [2024-11-26 20:36:18.329963] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.947 [2024-11-26 20:36:18.330023] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.947 [2024-11-26 20:36:18.330033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.947 [2024-11-26 20:36:18.330041] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.948 [2024-11-26 20:36:18.330049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:03.948 [2024-11-26 20:36:18.330442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.948 [2024-11-26 20:36:18.367378] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:04.513 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:04.513 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:13:04.513 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:04.513 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:04.513 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:04.513 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.513 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:13:04.513 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:13:04.513 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:13:04.513 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.513 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:04.513 [2024-11-26 20:36:19.046972] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.513 20:36:19 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.513 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:13:04.513 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.513 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:04.513 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.513 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:13:04.513 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.513 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:04.784 Malloc0 00:13:04.784 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.784 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:13:04.784 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.784 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:04.784 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.784 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:13:04.784 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.784 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:04.784 [2024-11-26 20:36:19.082509] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:04.784 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.784 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=72253 00:13:04.784 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:04.784 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=72254 00:13:04.784 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:04.784 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=72255 00:13:04.784 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 72253 00:13:04.784 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:04.784 [2024-11-26 20:36:19.260768] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:04.784 [2024-11-26 20:36:19.271151] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:04.784 [2024-11-26 20:36:19.271472] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:06.161 Initializing NVMe Controllers 00:13:06.161 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:13:06.161 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:13:06.161 Initialization complete. Launching workers. 00:13:06.161 ======================================================== 00:13:06.161 Latency(us) 00:13:06.161 Device Information : IOPS MiB/s Average min max 00:13:06.161 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4416.00 17.25 226.16 98.07 417.75 00:13:06.161 ======================================================== 00:13:06.161 Total : 4416.00 17.25 226.16 98.07 417.75 00:13:06.161 00:13:06.161 Initializing NVMe Controllers 00:13:06.161 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:13:06.161 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:13:06.161 Initialization complete. Launching workers. 00:13:06.161 ======================================================== 00:13:06.161 Latency(us) 00:13:06.161 Device Information : IOPS MiB/s Average min max 00:13:06.161 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4403.96 17.20 226.75 149.70 391.80 00:13:06.161 ======================================================== 00:13:06.161 Total : 4403.96 17.20 226.75 149.70 391.80 00:13:06.161 00:13:06.161 Initializing NVMe Controllers 00:13:06.161 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:13:06.161 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:13:06.161 Initialization complete. Launching workers. 
00:13:06.161 ======================================================== 00:13:06.161 Latency(us) 00:13:06.161 Device Information : IOPS MiB/s Average min max 00:13:06.161 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4409.00 17.22 226.50 106.31 387.55 00:13:06.161 ======================================================== 00:13:06.161 Total : 4409.00 17.22 226.50 106.31 387.55 00:13:06.161 00:13:06.161 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 72254 00:13:06.161 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 72255 00:13:06.161 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:06.161 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:13:06.161 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:06.161 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:13:06.161 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:06.161 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:13:06.161 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:06.161 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:06.161 rmmod nvme_tcp 00:13:06.161 rmmod nvme_fabrics 00:13:06.161 rmmod nvme_keyring 00:13:06.161 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:06.161 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:13:06.161 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:13:06.161 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 72221 ']' 00:13:06.161 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 72221 00:13:06.161 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 72221 ']' 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 72221 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72221 00:13:06.162 killing process with pid 72221 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72221' 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 72221 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 72221 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:06.162 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:06.419 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:06.419 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:06.419 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.419 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:06.419 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.419 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:13:06.419 00:13:06.419 real 0m3.170s 00:13:06.419 user 0m5.517s 00:13:06.419 
sys 0m1.000s 00:13:06.420 ************************************ 00:13:06.420 END TEST nvmf_control_msg_list 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:06.420 ************************************ 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:06.420 ************************************ 00:13:06.420 START TEST nvmf_wait_for_buf 00:13:06.420 ************************************ 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:13:06.420 * Looking for test storage... 00:13:06.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:13:06.420 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:06.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.678 --rc genhtml_branch_coverage=1 00:13:06.678 --rc genhtml_function_coverage=1 00:13:06.678 --rc genhtml_legend=1 00:13:06.678 --rc geninfo_all_blocks=1 00:13:06.678 --rc geninfo_unexecuted_blocks=1 00:13:06.678 00:13:06.678 ' 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:06.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.678 --rc genhtml_branch_coverage=1 00:13:06.678 --rc genhtml_function_coverage=1 00:13:06.678 --rc genhtml_legend=1 00:13:06.678 --rc geninfo_all_blocks=1 00:13:06.678 --rc geninfo_unexecuted_blocks=1 00:13:06.678 00:13:06.678 ' 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:06.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.678 --rc genhtml_branch_coverage=1 00:13:06.678 --rc genhtml_function_coverage=1 00:13:06.678 --rc genhtml_legend=1 00:13:06.678 --rc geninfo_all_blocks=1 00:13:06.678 --rc geninfo_unexecuted_blocks=1 00:13:06.678 00:13:06.678 ' 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:06.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.678 --rc genhtml_branch_coverage=1 00:13:06.678 --rc genhtml_function_coverage=1 00:13:06.678 --rc genhtml_legend=1 00:13:06.678 --rc geninfo_all_blocks=1 00:13:06.678 --rc geninfo_unexecuted_blocks=1 00:13:06.678 00:13:06.678 ' 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:06.678 20:36:20 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.678 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.679 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.679 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.679 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.679 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:13:06.679 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.679 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:13:06.679 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:06.679 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:06.679 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:06.679 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:06.679 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:06.679 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:06.679 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:06.679 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:06.679 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:06.679 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:06.679 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:13:06.679 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
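The "integer expression expected" message after nvmf/common.sh@33 here (the same one seen in the control_msg_list run above) comes from evaluating '[' '' -eq 1 ']': the tested variable expands to an empty string, which is not a valid integer operand for -eq, so the test builtin prints the error and returns a non-zero status, and the script simply carries on to the next branch, as the trace shows with common.sh@37 running immediately afterwards. A minimal reproduction and one defensive spelling (the variable name is illustrative, not taken from common.sh):

    flag=""
    [ "$flag" -eq 1 ]        # bash: [: : integer expression expected; non-zero status, treated as false
    [ "${flag:-0}" -eq 1 ]   # defaulting the empty value to 0 avoids the message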
00:13:06.679 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:06.679 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:06.679 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:06.679 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:06.679 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.679 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:06.679 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:06.679 Cannot find device "nvmf_init_br" 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:06.679 Cannot find device "nvmf_init_br2" 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:06.679 Cannot find device "nvmf_tgt_br" 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:06.679 Cannot find device "nvmf_tgt_br2" 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:06.679 Cannot find device "nvmf_init_br" 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:06.679 Cannot find device "nvmf_init_br2" 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:06.679 Cannot find device "nvmf_tgt_br" 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:06.679 Cannot find device "nvmf_tgt_br2" 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:06.679 Cannot find device "nvmf_br" 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:06.679 Cannot find device "nvmf_init_if" 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:06.679 Cannot find device "nvmf_init_if2" 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:06.679 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:06.679 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:06.679 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:06.680 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:06.680 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:06.680 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:06.680 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:06.938 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:06.938 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:13:06.938 00:13:06.938 --- 10.0.0.3 ping statistics --- 00:13:06.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.938 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:06.938 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:06.938 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.096 ms 00:13:06.938 00:13:06.938 --- 10.0.0.4 ping statistics --- 00:13:06.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.938 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:06.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:06.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:13:06.938 00:13:06.938 --- 10.0.0.1 ping statistics --- 00:13:06.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.938 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:06.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:06.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:13:06.938 00:13:06.938 --- 10.0.0.2 ping statistics --- 00:13:06.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.938 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=72486 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 72486 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 72486 ']' 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.938 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:06.938 [2024-11-26 20:36:21.373024] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:13:06.938 [2024-11-26 20:36:21.373222] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.247 [2024-11-26 20:36:21.514899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.247 [2024-11-26 20:36:21.552052] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.247 [2024-11-26 20:36:21.552233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:07.247 [2024-11-26 20:36:21.552432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.247 [2024-11-26 20:36:21.552562] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.247 [2024-11-26 20:36:21.552581] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:07.247 [2024-11-26 20:36:21.552932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.810 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:07.810 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:13:07.810 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:07.810 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:07.810 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:07.810 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.810 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:13:07.810 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:13:07.810 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:13:07.810 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.810 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:07.810 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.810 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:13:07.810 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.810 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:07.810 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.810 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:13:07.810 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.811 20:36:22 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:07.811 [2024-11-26 20:36:22.313695] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:07.811 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.811 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:13:07.811 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.811 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:07.811 Malloc0 00:13:07.811 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.811 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:13:07.811 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.811 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:07.811 [2024-11-26 20:36:22.359726] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:08.067 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.067 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:13:08.067 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.067 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:08.067 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.067 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:13:08.067 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.067 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:08.067 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.067 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:13:08.067 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.067 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:08.067 [2024-11-26 20:36:22.383777] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:08.067 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.067 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:08.067 [2024-11-26 20:36:22.579673] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:09.437 Initializing NVMe Controllers 00:13:09.437 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:13:09.437 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:13:09.437 Initialization complete. Launching workers. 00:13:09.437 ======================================================== 00:13:09.437 Latency(us) 00:13:09.437 Device Information : IOPS MiB/s Average min max 00:13:09.437 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 504.00 63.00 8000.17 5020.09 10979.99 00:13:09.437 ======================================================== 00:13:09.437 Total : 504.00 63.00 8000.17 5020.09 10979.99 00:13:09.437 00:13:09.437 20:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:13:09.437 20:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.437 20:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:09.437 20:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:13:09.437 20:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.437 20:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4788 00:13:09.437 20:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4788 -eq 0 ]] 00:13:09.437 20:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:09.437 20:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:13:09.437 20:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:09.437 20:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:13:09.437 20:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:09.437 20:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:13:09.437 20:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:09.437 20:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:09.437 rmmod nvme_tcp 00:13:09.437 rmmod nvme_fabrics 00:13:09.437 rmmod nvme_keyring 00:13:09.727 20:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 72486 ']' 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 72486 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 72486 ']' 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 72486 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72486 00:13:09.727 killing process with pid 72486 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72486' 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 72486 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 72486 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:09.727 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:09.993 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:09.993 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:09.993 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:09.993 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:09.993 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:09.993 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.993 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.993 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.993 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:13:09.993 ************************************ 00:13:09.993 END TEST nvmf_wait_for_buf 00:13:09.993 ************************************ 00:13:09.993 00:13:09.993 real 0m3.554s 00:13:09.993 user 0m3.101s 00:13:09.993 sys 0m0.618s 00:13:09.993 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.993 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:09.993 20:36:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:13:09.993 20:36:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:13:09.993 20:36:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:13:09.993 20:36:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:09.993 20:36:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.993 20:36:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:09.993 ************************************ 00:13:09.993 START TEST nvmf_nsid 00:13:09.993 ************************************ 00:13:09.993 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:13:09.993 * Looking for test storage... 
00:13:09.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:09.993 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:09.993 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:13:09.993 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:10.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.253 --rc genhtml_branch_coverage=1 00:13:10.253 --rc genhtml_function_coverage=1 00:13:10.253 --rc genhtml_legend=1 00:13:10.253 --rc geninfo_all_blocks=1 00:13:10.253 --rc geninfo_unexecuted_blocks=1 00:13:10.253 00:13:10.253 ' 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:10.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.253 --rc genhtml_branch_coverage=1 00:13:10.253 --rc genhtml_function_coverage=1 00:13:10.253 --rc genhtml_legend=1 00:13:10.253 --rc geninfo_all_blocks=1 00:13:10.253 --rc geninfo_unexecuted_blocks=1 00:13:10.253 00:13:10.253 ' 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:10.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.253 --rc genhtml_branch_coverage=1 00:13:10.253 --rc genhtml_function_coverage=1 00:13:10.253 --rc genhtml_legend=1 00:13:10.253 --rc geninfo_all_blocks=1 00:13:10.253 --rc geninfo_unexecuted_blocks=1 00:13:10.253 00:13:10.253 ' 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:10.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.253 --rc genhtml_branch_coverage=1 00:13:10.253 --rc genhtml_function_coverage=1 00:13:10.253 --rc genhtml_legend=1 00:13:10.253 --rc geninfo_all_blocks=1 00:13:10.253 --rc geninfo_unexecuted_blocks=1 00:13:10.253 00:13:10.253 ' 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
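[editor's note] The trace above walks scripts/common.sh's component-wise version check (here concluding that lcov 1.15 is older than 2): each version is split on '.', '-' or ':' and the numeric fields are compared left to right. A simplified sketch of that pattern, not the exact helper:

  # usage: cmp_versions 1.15 '<' 2
  cmp_versions() {
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local v
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          local a=${ver1[v]:-0} b=${ver2[v]:-0}
          ((a > b)) && { [[ $2 == '>' ]]; return; }
          ((a < b)) && { [[ $2 == '<' ]]; return; }
      done
      [[ $2 == *'='* ]]   # all components equal: only '<=', '>=', '==' succeed
  }
  cmp_versions 1.15 '<' 2 && echo "1.15 is older than 2"
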
00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:13:10.253 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:10.254 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:10.254 Cannot find device "nvmf_init_br" 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:10.254 Cannot find device "nvmf_init_br2" 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:10.254 Cannot find device "nvmf_tgt_br" 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:10.254 Cannot find device "nvmf_tgt_br2" 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:10.254 Cannot find device "nvmf_init_br" 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:10.254 Cannot find device "nvmf_init_br2" 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:10.254 Cannot find device "nvmf_tgt_br" 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:10.254 Cannot find device "nvmf_tgt_br2" 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:10.254 Cannot find device "nvmf_br" 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:10.254 Cannot find device "nvmf_init_if" 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:10.254 Cannot find device "nvmf_init_if2" 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:10.254 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:13:10.254 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:10.254 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:10.255 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:10.255 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:10.255 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
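[editor's note] The nvmf_veth_init sequence traced above builds a small virtual topology: two initiator veth pairs on the host, two target veth pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace, and an nvmf_br bridge joining the four bridge-side ends. Condensed into a standalone sketch (names and 10.0.0.x addresses as they appear in the log; the real helper also handles cleanup and the iptables rules that follow):

  ip netns add nvmf_tgt_ns_spdk

  ip link add nvmf_init_if  type veth peer name nvmf_init_br   # initiator side, stays on the host
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br    # target side, moved into the netns
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator addresses
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge the four host-side ends
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" master nvmf_br
  done
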
00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:10.513 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:10.513 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:13:10.513 00:13:10.513 --- 10.0.0.3 ping statistics --- 00:13:10.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.513 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:10.513 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:10.513 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.092 ms 00:13:10.513 00:13:10.513 --- 10.0.0.4 ping statistics --- 00:13:10.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.513 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:10.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:10.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:13:10.513 00:13:10.513 --- 10.0.0.1 ping statistics --- 00:13:10.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.513 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:10.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:10.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:13:10.513 00:13:10.513 --- 10.0.0.2 ping statistics --- 00:13:10.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.513 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=72749 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 72749 00:13:10.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 72749 ']' 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:13:10.513 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:10.513 [2024-11-26 20:36:25.014036] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:13:10.513 [2024-11-26 20:36:25.014093] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.771 [2024-11-26 20:36:25.152202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.771 [2024-11-26 20:36:25.188259] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.771 [2024-11-26 20:36:25.188299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.771 [2024-11-26 20:36:25.188307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:10.771 [2024-11-26 20:36:25.188313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:10.771 [2024-11-26 20:36:25.188319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:10.771 [2024-11-26 20:36:25.188578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.771 [2024-11-26 20:36:25.220628] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=72781 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=28016110-272d-4293-a146-e98dd42ed81d 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=b42a0a18-61d9-41c1-80ab-c03b0c72ee24 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=d726ae2c-53ba-4461-a5a6-2a2e16f05f41 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.706 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:11.706 null0 00:13:11.706 null1 00:13:11.706 [2024-11-26 20:36:25.991373] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:13:11.706 [2024-11-26 20:36:25.991434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72781 ] 00:13:11.706 null2 00:13:11.706 [2024-11-26 20:36:25.998122] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.706 [2024-11-26 20:36:26.022213] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:11.706 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.706 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 72781 /var/tmp/tgt2.sock 00:13:11.706 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 72781 ']' 00:13:11.706 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:13:11.706 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:13:11.706 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
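target/nsid.sh drives the second target entirely over JSON-RPC on /var/tmp/tgt2.sock; the trace shows only the outcome (null0/null1/null2 and the listeners), not the RPC bodies themselves. As a rough illustration of the mechanism under test, namely namespaces whose UUID/NGUID is chosen by the caller, the rpc.py calls for a single target would look something like the sketch below. This is not a transcript of the script's own RPC batch: the subsystem NQN is taken from the nvme connect that follows, the rest is illustrative, and the flag spellings should be checked against the rpc.py in this tree.

RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock'
$RPC nvmf_create_transport -t tcp
$RPC bdev_null_create null0 64 512                       # 64 MiB null bdev, 512-byte blocks
$RPC bdev_null_create null1 64 512
$RPC bdev_null_create null2 64 512
$RPC nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
$RPC nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 --uuid "$ns1uuid"
$RPC nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null1 --uuid "$ns2uuid"
$RPC nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null2 --uuid "$ns3uuid"
$RPC nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421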
00:13:11.706 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.706 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:11.706 [2024-11-26 20:36:26.131371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.706 [2024-11-26 20:36:26.167229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.706 [2024-11-26 20:36:26.213258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:11.964 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.964 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:13:11.964 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:13:12.222 [2024-11-26 20:36:26.705687] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:12.222 [2024-11-26 20:36:26.721785] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:13:12.222 nvme0n1 nvme0n2 00:13:12.222 nvme1n1 00:13:12.222 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:13:12.222 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:13:12.222 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:13:12.557 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:13:12.557 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:13:12.557 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:13:12.557 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:13:12.557 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:13:12.557 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:13:12.557 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:13:12.557 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:13:12.557 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:13:12.557 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:12.557 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:13:12.557 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:13:12.557 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:13:13.495 20:36:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:13.495 20:36:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:13:13.495 20:36:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:13:13.495 20:36:27 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:13.495 20:36:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:13:13.495 20:36:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 28016110-272d-4293-a146-e98dd42ed81d 00:13:13.495 20:36:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:13:13.495 20:36:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:13:13.495 20:36:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:13:13.495 20:36:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:13:13.495 20:36:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:13:13.495 20:36:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=28016110272d4293a146e98dd42ed81d 00:13:13.495 20:36:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 28016110272D4293A146E98DD42ED81D 00:13:13.495 20:36:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 28016110272D4293A146E98DD42ED81D == \2\8\0\1\6\1\1\0\2\7\2\D\4\2\9\3\A\1\4\6\E\9\8\D\D\4\2\E\D\8\1\D ]] 00:13:13.495 20:36:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:13:13.495 20:36:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:13:13.495 20:36:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:13.495 20:36:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:13:13.495 20:36:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:13:13.495 20:36:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:13.495 20:36:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:13:13.495 20:36:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid b42a0a18-61d9-41c1-80ab-c03b0c72ee24 00:13:13.495 20:36:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:13:13.495 20:36:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:13:13.495 20:36:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:13:13.495 20:36:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:13:13.495 20:36:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:13:13.495 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=b42a0a1861d941c180abc03b0c72ee24 00:13:13.495 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo B42A0A1861D941C180ABC03B0C72EE24 00:13:13.495 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ B42A0A1861D941C180ABC03B0C72EE24 == \B\4\2\A\0\A\1\8\6\1\D\9\4\1\C\1\8\0\A\B\C\0\3\B\0\C\7\2\E\E\2\4 ]] 00:13:13.495 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:13:13.495 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:13:13.495 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:13:13.496 20:36:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:13.496 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:13:13.496 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:13.752 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:13:13.752 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid d726ae2c-53ba-4461-a5a6-2a2e16f05f41 00:13:13.752 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:13:13.752 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:13:13.752 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:13:13.752 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:13:13.752 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:13:13.752 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d726ae2c53ba4461a5a62a2e16f05f41 00:13:13.753 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D726AE2C53BA4461A5A62A2E16F05F41 00:13:13.753 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ D726AE2C53BA4461A5A62A2E16F05F41 == \D\7\2\6\A\E\2\C\5\3\B\A\4\4\6\1\A\5\A\6\2\A\2\E\1\6\F\0\5\F\4\1 ]] 00:13:13.753 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:13:13.753 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:13:13.753 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:13:13.753 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 72781 00:13:13.753 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 72781 ']' 00:13:13.753 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 72781 00:13:13.753 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:13:13.753 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:13.753 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72781 00:13:13.753 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:13.753 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:13.753 killing process with pid 72781 00:13:13.753 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72781' 00:13:13.753 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 72781 00:13:13.753 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 72781 00:13:14.011 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:13:14.011 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:14.011 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:13:14.011 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' 
tcp == tcp ']' 00:13:14.011 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:13:14.011 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:14.011 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:14.011 rmmod nvme_tcp 00:13:14.011 rmmod nvme_fabrics 00:13:14.011 rmmod nvme_keyring 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 72749 ']' 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 72749 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 72749 ']' 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 72749 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72749 00:13:14.269 killing process with pid 72749 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72749' 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 72749 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 72749 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:14.269 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:14.529 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:14.529 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:14.529 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:14.529 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:14.529 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:14.529 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:14.529 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.529 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:14.529 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.529 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:13:14.529 00:13:14.529 real 0m4.532s 00:13:14.529 user 0m6.491s 00:13:14.529 sys 0m1.311s 00:13:14.529 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.529 ************************************ 00:13:14.529 END TEST nvmf_nsid 00:13:14.529 ************************************ 00:13:14.529 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:14.529 20:36:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:14.529 ************************************ 00:13:14.529 END TEST nvmf_target_extra 00:13:14.529 ************************************ 00:13:14.529 00:13:14.529 real 4m22.767s 00:13:14.529 user 8m59.028s 00:13:14.529 sys 0m50.954s 00:13:14.529 20:36:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.529 20:36:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:14.529 20:36:29 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:13:14.529 20:36:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:14.529 20:36:29 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.529 20:36:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:14.529 ************************************ 00:13:14.529 START TEST nvmf_host 00:13:14.529 ************************************ 00:13:14.529 20:36:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:13:14.787 * Looking for test storage... 
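The pass/fail core of the nsid test above is that round-trip: each UUID handed to the target must come back from the host, dash-free and byte-for-byte identical, as the NGUID in the Identify Namespace data. Reduced to its essentials, with the controller address, host identity, and the nsid-1 UUID from this run:

nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 \
  --hostid=38d6bd30-54c5-4858-a242-ab15764fb2d9
uuid=28016110-272d-4293-a146-e98dd42ed81d                      # UUID assigned to nsid 1 on the target side
expected=$(tr -d - <<< "$uuid" | tr '[:lower:]' '[:upper:]')   # 32 hex digits, no dashes
actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
[[ "$actual" == "$expected" ]] && echo 'nsid 1: NGUID matches its UUID'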
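Teardown is the mirror image, and the comment added to each iptables rule during setup is what keeps it surgical: the ruleset is re-saved with every SPDK-tagged rule filtered out rather than being flushed wholesale. What the cleanup above amounts to (the final netns removal is what _remove_spdk_ns boils down to here, stated as an assumption):

nvme disconnect -d /dev/nvme0                          # drop the NVMe/TCP controller
modprobe -r nvme-tcp nvme-fabrics                      # unload the host-side transport modules
iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove only the SPDK-tagged rules
ip link delete nvmf_br type bridge                     # bridge and host-side veths
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk                       # assumed: drop the target namespace itself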
00:13:14.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:13:14.787 20:36:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:14.787 20:36:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:14.787 20:36:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:13:14.787 20:36:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:14.787 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:14.787 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:14.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.788 --rc genhtml_branch_coverage=1 00:13:14.788 --rc genhtml_function_coverage=1 00:13:14.788 --rc genhtml_legend=1 00:13:14.788 --rc geninfo_all_blocks=1 00:13:14.788 --rc geninfo_unexecuted_blocks=1 00:13:14.788 00:13:14.788 ' 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:14.788 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:14.788 --rc genhtml_branch_coverage=1 00:13:14.788 --rc genhtml_function_coverage=1 00:13:14.788 --rc genhtml_legend=1 00:13:14.788 --rc geninfo_all_blocks=1 00:13:14.788 --rc geninfo_unexecuted_blocks=1 00:13:14.788 00:13:14.788 ' 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:14.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.788 --rc genhtml_branch_coverage=1 00:13:14.788 --rc genhtml_function_coverage=1 00:13:14.788 --rc genhtml_legend=1 00:13:14.788 --rc geninfo_all_blocks=1 00:13:14.788 --rc geninfo_unexecuted_blocks=1 00:13:14.788 00:13:14.788 ' 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:14.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.788 --rc genhtml_branch_coverage=1 00:13:14.788 --rc genhtml_function_coverage=1 00:13:14.788 --rc genhtml_legend=1 00:13:14.788 --rc geninfo_all_blocks=1 00:13:14.788 --rc geninfo_unexecuted_blocks=1 00:13:14.788 00:13:14.788 ' 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:14.788 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:14.788 
20:36:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:13:14.788 ************************************ 00:13:14.788 START TEST nvmf_identify 00:13:14.788 ************************************ 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:14.788 * Looking for test storage... 00:13:14.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:14.788 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:15.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.047 --rc genhtml_branch_coverage=1 00:13:15.047 --rc genhtml_function_coverage=1 00:13:15.047 --rc genhtml_legend=1 00:13:15.047 --rc geninfo_all_blocks=1 00:13:15.047 --rc geninfo_unexecuted_blocks=1 00:13:15.047 00:13:15.047 ' 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:15.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.047 --rc genhtml_branch_coverage=1 00:13:15.047 --rc genhtml_function_coverage=1 00:13:15.047 --rc genhtml_legend=1 00:13:15.047 --rc geninfo_all_blocks=1 00:13:15.047 --rc geninfo_unexecuted_blocks=1 00:13:15.047 00:13:15.047 ' 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:15.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.047 --rc genhtml_branch_coverage=1 00:13:15.047 --rc genhtml_function_coverage=1 00:13:15.047 --rc genhtml_legend=1 00:13:15.047 --rc geninfo_all_blocks=1 00:13:15.047 --rc geninfo_unexecuted_blocks=1 00:13:15.047 00:13:15.047 ' 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:15.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.047 --rc genhtml_branch_coverage=1 00:13:15.047 --rc genhtml_function_coverage=1 00:13:15.047 --rc genhtml_legend=1 00:13:15.047 --rc geninfo_all_blocks=1 00:13:15.047 --rc geninfo_unexecuted_blocks=1 00:13:15.047 00:13:15.047 ' 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.047 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.047 
20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:15.048 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.048 20:36:29 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:15.048 Cannot find device "nvmf_init_br" 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:15.048 Cannot find device "nvmf_init_br2" 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:15.048 Cannot find device "nvmf_tgt_br" 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:13:15.048 Cannot find device "nvmf_tgt_br2" 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:15.048 Cannot find device "nvmf_init_br" 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:15.048 Cannot find device "nvmf_init_br2" 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:15.048 Cannot find device "nvmf_tgt_br" 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:15.048 Cannot find device "nvmf_tgt_br2" 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:15.048 Cannot find device "nvmf_br" 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:15.048 Cannot find device "nvmf_init_if" 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:15.048 Cannot find device "nvmf_init_if2" 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:15.048 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:15.048 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:15.048 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:15.306 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:15.306 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:15.306 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:15.306 
20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:15.306 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:15.306 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:15.306 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:15.306 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:15.306 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:15.306 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:15.306 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:15.306 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:15.306 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:15.306 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:15.306 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:15.306 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:15.306 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:15.307 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:15.307 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:13:15.307 00:13:15.307 --- 10.0.0.3 ping statistics --- 00:13:15.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.307 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:15.307 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:15.307 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:13:15.307 00:13:15.307 --- 10.0.0.4 ping statistics --- 00:13:15.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.307 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:15.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:15.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:13:15.307 00:13:15.307 --- 10.0.0.1 ping statistics --- 00:13:15.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.307 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:15.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:15.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:13:15.307 00:13:15.307 --- 10.0.0.2 ping statistics --- 00:13:15.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.307 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=73133 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 73133 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 73133 ']' 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.307 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.307 20:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:15.564 [2024-11-26 20:36:29.861706] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:13:15.564 [2024-11-26 20:36:29.861803] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.564 [2024-11-26 20:36:30.011522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:15.564 [2024-11-26 20:36:30.051453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.564 [2024-11-26 20:36:30.051675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.564 [2024-11-26 20:36:30.051740] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.564 [2024-11-26 20:36:30.051768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.565 [2024-11-26 20:36:30.051821] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
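For reference, the ip/iptables trace above builds a small veth-plus-bridge topology before the target comes up: the initiator-side ends (nvmf_init_if, nvmf_init_if2) stay in the default namespace with 10.0.0.1/10.0.0.2, the target-side ends (nvmf_tgt_if, nvmf_tgt_if2) are moved into the nvmf_tgt_ns_spdk namespace with 10.0.0.3/10.0.0.4, and the peer interfaces are enslaved to the nvmf_br bridge so the two namespaces can reach each other on TCP port 4420. A condensed manual equivalent, only showing the first veth pair per side (names, addresses and the nvmf_tgt path are the ones from the log; error handling omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, default netns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # start the SPDK target inside the namespace, as host/identify.sh@18 does above
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &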
00:13:15.565 [2024-11-26 20:36:30.052687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.565 [2024-11-26 20:36:30.053260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.565 [2024-11-26 20:36:30.053540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:15.565 [2024-11-26 20:36:30.053545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.565 [2024-11-26 20:36:30.087794] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:16.497 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:16.497 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:13:16.497 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:16.497 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.497 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:16.497 [2024-11-26 20:36:30.748241] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:16.497 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.497 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:13:16.497 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:16.497 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:16.497 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:16.497 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.497 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:16.497 Malloc0 00:13:16.497 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.497 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:16.497 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.497 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:16.497 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.498 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:13:16.498 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.498 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:16.498 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.498 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:16.498 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.498 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:16.498 [2024-11-26 20:36:30.859750] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:16.498 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.498 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:16.498 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.498 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:16.498 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.498 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:13:16.498 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.498 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:16.498 [ 00:13:16.498 { 00:13:16.498 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:16.498 "subtype": "Discovery", 00:13:16.498 "listen_addresses": [ 00:13:16.498 { 00:13:16.498 "trtype": "TCP", 00:13:16.498 "adrfam": "IPv4", 00:13:16.498 "traddr": "10.0.0.3", 00:13:16.498 "trsvcid": "4420" 00:13:16.498 } 00:13:16.498 ], 00:13:16.498 "allow_any_host": true, 00:13:16.498 "hosts": [] 00:13:16.498 }, 00:13:16.498 { 00:13:16.498 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:16.498 "subtype": "NVMe", 00:13:16.498 "listen_addresses": [ 00:13:16.498 { 00:13:16.498 "trtype": "TCP", 00:13:16.498 "adrfam": "IPv4", 00:13:16.498 "traddr": "10.0.0.3", 00:13:16.498 "trsvcid": "4420" 00:13:16.498 } 00:13:16.498 ], 00:13:16.498 "allow_any_host": true, 00:13:16.498 "hosts": [], 00:13:16.498 "serial_number": "SPDK00000000000001", 00:13:16.498 "model_number": "SPDK bdev Controller", 00:13:16.498 "max_namespaces": 32, 00:13:16.498 "min_cntlid": 1, 00:13:16.498 "max_cntlid": 65519, 00:13:16.498 "namespaces": [ 00:13:16.498 { 00:13:16.498 "nsid": 1, 00:13:16.498 "bdev_name": "Malloc0", 00:13:16.498 "name": "Malloc0", 00:13:16.498 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:13:16.498 "eui64": "ABCDEF0123456789", 00:13:16.498 "uuid": "9108cbe1-d01b-41c9-8e49-e2684672bc6b" 00:13:16.498 } 00:13:16.498 ] 00:13:16.498 } 00:13:16.498 ] 00:13:16.498 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.498 20:36:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:13:16.498 [2024-11-26 20:36:30.904364] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
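The rpc_cmd calls traced above forward to SPDK's scripts/rpc.py, which talks to the /var/tmp/spdk.sock UNIX socket the target is listening on, so the same configuration can be reproduced by hand once nvmf_tgt is up. A sketch using the NQN, serial number, NGUID/EUI64 and listen address shown in the log (repository path as in the log):

  cd /home/vagrant/spdk_repo/spdk
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems   # should report the discovery and cnode1 subsystems as in the JSON above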
00:13:16.498 [2024-11-26 20:36:30.904404] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73168 ] 00:13:16.770 [2024-11-26 20:36:31.058426] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:13:16.770 [2024-11-26 20:36:31.058498] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:16.770 [2024-11-26 20:36:31.058502] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:16.770 [2024-11-26 20:36:31.058517] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:16.770 [2024-11-26 20:36:31.058527] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:13:16.770 [2024-11-26 20:36:31.058806] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:13:16.770 [2024-11-26 20:36:31.058847] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xfc3750 0 00:13:16.770 [2024-11-26 20:36:31.065619] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:16.770 [2024-11-26 20:36:31.065650] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:16.770 [2024-11-26 20:36:31.065656] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:16.770 [2024-11-26 20:36:31.065661] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:16.770 [2024-11-26 20:36:31.065700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.770 [2024-11-26 20:36:31.065706] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.770 [2024-11-26 20:36:31.065710] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfc3750) 00:13:16.770 [2024-11-26 20:36:31.065724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:16.770 [2024-11-26 20:36:31.065757] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027740, cid 0, qid 0 00:13:16.770 [2024-11-26 20:36:31.073610] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.770 [2024-11-26 20:36:31.073633] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.771 [2024-11-26 20:36:31.073637] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.771 [2024-11-26 20:36:31.073641] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027740) on tqpair=0xfc3750 00:13:16.771 [2024-11-26 20:36:31.073650] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:16.771 [2024-11-26 20:36:31.073658] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:13:16.771 [2024-11-26 20:36:31.073663] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:13:16.771 [2024-11-26 20:36:31.073682] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.771 [2024-11-26 20:36:31.073685] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:13:16.771 [2024-11-26 20:36:31.073688] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfc3750) 00:13:16.771 [2024-11-26 20:36:31.073698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.771 [2024-11-26 20:36:31.073730] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027740, cid 0, qid 0 00:13:16.771 [2024-11-26 20:36:31.073781] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.771 [2024-11-26 20:36:31.073786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.771 [2024-11-26 20:36:31.073788] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.771 [2024-11-26 20:36:31.073791] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027740) on tqpair=0xfc3750 00:13:16.771 [2024-11-26 20:36:31.073796] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:13:16.771 [2024-11-26 20:36:31.073801] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:13:16.771 [2024-11-26 20:36:31.073806] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.771 [2024-11-26 20:36:31.073808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.771 [2024-11-26 20:36:31.073811] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfc3750) 00:13:16.771 [2024-11-26 20:36:31.073816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.771 [2024-11-26 20:36:31.073827] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027740, cid 0, qid 0 00:13:16.771 [2024-11-26 20:36:31.073865] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.771 [2024-11-26 20:36:31.073870] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.771 [2024-11-26 20:36:31.073872] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.771 [2024-11-26 20:36:31.073875] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027740) on tqpair=0xfc3750 00:13:16.771 [2024-11-26 20:36:31.073879] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:13:16.771 [2024-11-26 20:36:31.073884] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:13:16.771 [2024-11-26 20:36:31.073889] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.771 [2024-11-26 20:36:31.073892] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.771 [2024-11-26 20:36:31.073894] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfc3750) 00:13:16.771 [2024-11-26 20:36:31.073900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.771 [2024-11-26 20:36:31.073910] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027740, cid 0, qid 0 00:13:16.771 [2024-11-26 20:36:31.073953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.771 [2024-11-26 20:36:31.073958] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.771 [2024-11-26 20:36:31.073961] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.771 [2024-11-26 20:36:31.073963] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027740) on tqpair=0xfc3750 00:13:16.771 [2024-11-26 20:36:31.073967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:16.771 [2024-11-26 20:36:31.073974] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.771 [2024-11-26 20:36:31.073977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.771 [2024-11-26 20:36:31.073979] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfc3750) 00:13:16.771 [2024-11-26 20:36:31.073984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.771 [2024-11-26 20:36:31.073995] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027740, cid 0, qid 0 00:13:16.771 [2024-11-26 20:36:31.074043] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.771 [2024-11-26 20:36:31.074048] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.771 [2024-11-26 20:36:31.074050] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.771 [2024-11-26 20:36:31.074053] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027740) on tqpair=0xfc3750 00:13:16.771 [2024-11-26 20:36:31.074065] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:13:16.771 [2024-11-26 20:36:31.074071] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:13:16.771 [2024-11-26 20:36:31.074080] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:16.771 [2024-11-26 20:36:31.074185] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:13:16.771 [2024-11-26 20:36:31.074189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:16.771 [2024-11-26 20:36:31.074196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.771 [2024-11-26 20:36:31.074198] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.771 [2024-11-26 20:36:31.074201] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfc3750) 00:13:16.771 [2024-11-26 20:36:31.074206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.771 [2024-11-26 20:36:31.074220] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027740, cid 0, qid 0 00:13:16.771 [2024-11-26 20:36:31.074258] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.771 [2024-11-26 20:36:31.074262] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.771 [2024-11-26 20:36:31.074264] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:13:16.771 [2024-11-26 20:36:31.074267] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027740) on tqpair=0xfc3750 00:13:16.771 [2024-11-26 20:36:31.074271] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:16.771 [2024-11-26 20:36:31.074277] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.771 [2024-11-26 20:36:31.074280] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.771 [2024-11-26 20:36:31.074282] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfc3750) 00:13:16.771 [2024-11-26 20:36:31.074288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.771 [2024-11-26 20:36:31.074297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027740, cid 0, qid 0 00:13:16.771 [2024-11-26 20:36:31.074331] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.771 [2024-11-26 20:36:31.074335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.772 [2024-11-26 20:36:31.074338] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.772 [2024-11-26 20:36:31.074340] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027740) on tqpair=0xfc3750 00:13:16.772 [2024-11-26 20:36:31.074344] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:16.772 [2024-11-26 20:36:31.074347] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:13:16.772 [2024-11-26 20:36:31.074352] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:13:16.772 [2024-11-26 20:36:31.074360] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:13:16.772 [2024-11-26 20:36:31.074369] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.772 [2024-11-26 20:36:31.074371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfc3750) 00:13:16.772 [2024-11-26 20:36:31.074377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.772 [2024-11-26 20:36:31.074387] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027740, cid 0, qid 0 00:13:16.772 [2024-11-26 20:36:31.074457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:16.772 [2024-11-26 20:36:31.074462] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:16.772 [2024-11-26 20:36:31.074464] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:16.772 [2024-11-26 20:36:31.074467] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfc3750): datao=0, datal=4096, cccid=0 00:13:16.772 [2024-11-26 20:36:31.074471] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1027740) on tqpair(0xfc3750): expected_datao=0, payload_size=4096 00:13:16.772 [2024-11-26 20:36:31.074474] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:13:16.772 [2024-11-26 20:36:31.074481] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:16.772 [2024-11-26 20:36:31.074484] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:16.772 [2024-11-26 20:36:31.074491] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.772 [2024-11-26 20:36:31.074495] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.772 [2024-11-26 20:36:31.074497] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.772 [2024-11-26 20:36:31.074500] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027740) on tqpair=0xfc3750 00:13:16.772 [2024-11-26 20:36:31.074507] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:13:16.772 [2024-11-26 20:36:31.074510] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:13:16.772 [2024-11-26 20:36:31.074513] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:13:16.772 [2024-11-26 20:36:31.074519] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:13:16.772 [2024-11-26 20:36:31.074523] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:13:16.772 [2024-11-26 20:36:31.074526] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:13:16.772 [2024-11-26 20:36:31.074532] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:13:16.772 [2024-11-26 20:36:31.074537] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.772 [2024-11-26 20:36:31.074540] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.772 [2024-11-26 20:36:31.074542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfc3750) 00:13:16.772 [2024-11-26 20:36:31.074548] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:16.772 [2024-11-26 20:36:31.074559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027740, cid 0, qid 0 00:13:16.772 [2024-11-26 20:36:31.074613] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.772 [2024-11-26 20:36:31.074619] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.772 [2024-11-26 20:36:31.074621] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.772 [2024-11-26 20:36:31.074624] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027740) on tqpair=0xfc3750 00:13:16.772 [2024-11-26 20:36:31.074630] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.772 [2024-11-26 20:36:31.074633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.772 [2024-11-26 20:36:31.074635] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfc3750) 00:13:16.772 [2024-11-26 20:36:31.074640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.772 [2024-11-26 
20:36:31.074646] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.772 [2024-11-26 20:36:31.074648] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.772 [2024-11-26 20:36:31.074651] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xfc3750) 00:13:16.772 [2024-11-26 20:36:31.074656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.772 [2024-11-26 20:36:31.074660] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.772 [2024-11-26 20:36:31.074663] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.772 [2024-11-26 20:36:31.074665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xfc3750) 00:13:16.772 [2024-11-26 20:36:31.074670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.772 [2024-11-26 20:36:31.074674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.772 [2024-11-26 20:36:31.074677] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.772 [2024-11-26 20:36:31.074679] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.772 [2024-11-26 20:36:31.074684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.772 [2024-11-26 20:36:31.074687] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:16.772 [2024-11-26 20:36:31.074692] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:16.772 [2024-11-26 20:36:31.074698] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.772 [2024-11-26 20:36:31.074700] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfc3750) 00:13:16.772 [2024-11-26 20:36:31.074706] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.772 [2024-11-26 20:36:31.074722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027740, cid 0, qid 0 00:13:16.772 [2024-11-26 20:36:31.074727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10278c0, cid 1, qid 0 00:13:16.772 [2024-11-26 20:36:31.074730] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027a40, cid 2, qid 0 00:13:16.772 [2024-11-26 20:36:31.074734] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.772 [2024-11-26 20:36:31.074737] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027d40, cid 4, qid 0 00:13:16.772 [2024-11-26 20:36:31.074823] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.772 [2024-11-26 20:36:31.074828] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.773 [2024-11-26 20:36:31.074831] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.773 [2024-11-26 20:36:31.074833] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027d40) on tqpair=0xfc3750 00:13:16.773 [2024-11-26 20:36:31.074838] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:13:16.773 [2024-11-26 20:36:31.074841] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:13:16.773 [2024-11-26 20:36:31.074848] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.773 [2024-11-26 20:36:31.074851] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfc3750) 00:13:16.773 [2024-11-26 20:36:31.074856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.773 [2024-11-26 20:36:31.074866] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027d40, cid 4, qid 0 00:13:16.773 [2024-11-26 20:36:31.074915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:16.773 [2024-11-26 20:36:31.074925] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:16.773 [2024-11-26 20:36:31.074928] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:16.773 [2024-11-26 20:36:31.074930] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfc3750): datao=0, datal=4096, cccid=4 00:13:16.773 [2024-11-26 20:36:31.074933] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1027d40) on tqpair(0xfc3750): expected_datao=0, payload_size=4096 00:13:16.773 [2024-11-26 20:36:31.074936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.773 [2024-11-26 20:36:31.074941] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:16.773 [2024-11-26 20:36:31.074944] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:16.773 [2024-11-26 20:36:31.074950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.773 [2024-11-26 20:36:31.074955] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.773 [2024-11-26 20:36:31.074958] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.773 [2024-11-26 20:36:31.074960] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027d40) on tqpair=0xfc3750 00:13:16.773 [2024-11-26 20:36:31.074970] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:13:16.773 [2024-11-26 20:36:31.074992] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.773 [2024-11-26 20:36:31.074996] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfc3750) 00:13:16.773 ===================================================== 00:13:16.773 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:13:16.773 ===================================================== 00:13:16.773 Controller Capabilities/Features 00:13:16.773 ================================ 00:13:16.773 Vendor ID: 0000 00:13:16.773 Subsystem Vendor ID: 0000 00:13:16.773 Serial Number: .................... 00:13:16.773 Model Number: ........................................ 
00:13:16.773 Firmware Version: 25.01 00:13:16.773 Recommended Arb Burst: 0 00:13:16.773 IEEE OUI Identifier: 00 00 00 00:13:16.773 Multi-path I/O 00:13:16.773 May have multiple subsystem ports: No 00:13:16.773 May have multiple controllers: No 00:13:16.773 Associated with SR-IOV VF: No 00:13:16.773 Max Data Transfer Size: 131072 00:13:16.773 Max Number of Namespaces: 0 00:13:16.773 Max Number of I/O Queues: 1024 00:13:16.773 NVMe Specification Version (VS): 1.3 00:13:16.773 NVMe Specification Version (Identify): 1.3 00:13:16.773 Maximum Queue Entries: 128 00:13:16.773 Contiguous Queues Required: Yes 00:13:16.773 Arbitration Mechanisms Supported 00:13:16.773 Weighted Round Robin: Not Supported 00:13:16.773 Vendor Specific: Not Supported 00:13:16.773 Reset Timeout: 15000 ms 00:13:16.773 Doorbell Stride: 4 bytes 00:13:16.773 NVM Subsystem Reset: Not Supported 00:13:16.773 Command Sets Supported 00:13:16.773 NVM Command Set: Supported 00:13:16.773 Boot Partition: Not Supported 00:13:16.773 Memory Page Size Minimum: 4096 bytes 00:13:16.773 Memory Page Size Maximum: 4096 bytes 00:13:16.773 Persistent Memory Region: Not Supported 00:13:16.773 Optional Asynchronous Events Supported 00:13:16.773 Namespace Attribute Notices: Not Supported 00:13:16.773 Firmware Activation Notices: Not Supported 00:13:16.773 ANA Change Notices: Not Supported 00:13:16.773 PLE Aggregate Log Change Notices: Not Supported 00:13:16.773 LBA Status Info Alert Notices: Not Supported 00:13:16.773 EGE Aggregate Log Change Notices: Not Supported 00:13:16.773 Normal NVM Subsystem Shutdown event: Not Supported 00:13:16.773 Zone Descriptor Change Notices: Not Supported 00:13:16.773 Discovery Log Change Notices: Supported 00:13:16.773 Controller Attributes 00:13:16.773 128-bit Host Identifier: Not Supported 00:13:16.773 Non-Operational Permissive Mode: Not Supported 00:13:16.773 NVM Sets: Not Supported 00:13:16.773 Read Recovery Levels: Not Supported 00:13:16.773 Endurance Groups: Not Supported 00:13:16.773 Predictable Latency Mode: Not Supported 00:13:16.773 Traffic Based Keep ALive: Not Supported 00:13:16.773 Namespace Granularity: Not Supported 00:13:16.773 SQ Associations: Not Supported 00:13:16.773 UUID List: Not Supported 00:13:16.773 Multi-Domain Subsystem: Not Supported 00:13:16.773 Fixed Capacity Management: Not Supported 00:13:16.773 Variable Capacity Management: Not Supported 00:13:16.773 Delete Endurance Group: Not Supported 00:13:16.773 Delete NVM Set: Not Supported 00:13:16.773 Extended LBA Formats Supported: Not Supported 00:13:16.773 Flexible Data Placement Supported: Not Supported 00:13:16.773 00:13:16.773 Controller Memory Buffer Support 00:13:16.773 ================================ 00:13:16.773 Supported: No 00:13:16.773 00:13:16.773 Persistent Memory Region Support 00:13:16.773 ================================ 00:13:16.773 Supported: No 00:13:16.773 00:13:16.773 Admin Command Set Attributes 00:13:16.773 ============================ 00:13:16.773 Security Send/Receive: Not Supported 00:13:16.773 Format NVM: Not Supported 00:13:16.773 Firmware Activate/Download: Not Supported 00:13:16.773 Namespace Management: Not Supported 00:13:16.773 Device Self-Test: Not Supported 00:13:16.773 Directives: Not Supported 00:13:16.773 NVMe-MI: Not Supported 00:13:16.773 Virtualization Management: Not Supported 00:13:16.773 Doorbell Buffer Config: Not Supported 00:13:16.773 Get LBA Status Capability: Not Supported 00:13:16.773 Command & Feature Lockdown Capability: Not Supported 00:13:16.773 Abort Command Limit: 1 00:13:16.773 Async 
Event Request Limit: 4 00:13:16.773 Number of Firmware Slots: N/A 00:13:16.773 Firmware Slot 1 Read-Only: N/A 00:13:16.773 Firmware Activation Without Reset: N/A 00:13:16.773 Multiple Update Detection Support: N/A 00:13:16.773 Firmware Update Granularity: No Information Provided 00:13:16.773 Per-Namespace SMART Log: No 00:13:16.773 Asymmetric Namespace Access Log Page: Not Supported 00:13:16.773 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:13:16.773 Command Effects Log Page: Not Supported 00:13:16.773 Get Log Page Extended Data: Supported 00:13:16.773 Telemetry Log Pages: Not Supported 00:13:16.773 Persistent Event Log Pages: Not Supported 00:13:16.773 Supported Log Pages Log Page: May Support 00:13:16.773 Commands Supported & Effects Log Page: Not Supported 00:13:16.773 Feature Identifiers & Effects Log Page:May Support 00:13:16.773 NVMe-MI Commands & Effects Log Page: May Support 00:13:16.773 Data Area 4 for Telemetry Log: Not Supported 00:13:16.773 Error Log Page Entries Supported: 128 00:13:16.773 Keep Alive: Not Supported 00:13:16.773 00:13:16.773 NVM Command Set Attributes 00:13:16.773 ========================== 00:13:16.773 Submission Queue Entry Size 00:13:16.773 Max: 1 00:13:16.773 Min: 1 00:13:16.773 Completion Queue Entry Size 00:13:16.773 Max: 1 00:13:16.773 Min: 1 00:13:16.773 Number of Namespaces: 0 00:13:16.773 Compare Command: Not Supported 00:13:16.773 Write Uncorrectable Command: Not Supported 00:13:16.773 Dataset Management Command: Not Supported 00:13:16.773 Write Zeroes Command: Not Supported 00:13:16.773 Set Features Save Field: Not Supported 00:13:16.773 Reservations: Not Supported 00:13:16.773 Timestamp: Not Supported 00:13:16.774 Copy: Not Supported 00:13:16.774 Volatile Write Cache: Not Present 00:13:16.774 Atomic Write Unit (Normal): 1 00:13:16.774 Atomic Write Unit (PFail): 1 00:13:16.774 Atomic Compare & Write Unit: 1 00:13:16.774 Fused Compare & Write: Supported 00:13:16.774 Scatter-Gather List 00:13:16.774 SGL Command Set: Supported 00:13:16.774 SGL Keyed: Supported 00:13:16.774 SGL Bit Bucket Descriptor: Not Supported 00:13:16.774 SGL Metadata Pointer: Not Supported 00:13:16.774 Oversized SGL: Not Supported 00:13:16.774 SGL Metadata Address: Not Supported 00:13:16.774 SGL Offset: Supported 00:13:16.774 Transport SGL Data Block: Not Supported 00:13:16.774 Replay Protected Memory Block: Not Supported 00:13:16.774 00:13:16.774 Firmware Slot Information 00:13:16.774 ========================= 00:13:16.774 Active slot: 0 00:13:16.774 00:13:16.774 00:13:16.774 Error Log 00:13:16.774 ========= 00:13:16.774 00:13:16.774 Active Namespaces 00:13:16.774 ================= 00:13:16.774 Discovery Log Page 00:13:16.774 ================== 00:13:16.774 Generation Counter: 2 00:13:16.774 Number of Records: 2 00:13:16.774 Record Format: 0 00:13:16.774 00:13:16.774 Discovery Log Entry 0 00:13:16.774 ---------------------- 00:13:16.774 Transport Type: 3 (TCP) 00:13:16.774 Address Family: 1 (IPv4) 00:13:16.774 Subsystem Type: 3 (Current Discovery Subsystem) 00:13:16.774 Entry Flags: 00:13:16.774 Duplicate Returned Information: 1 00:13:16.774 Explicit Persistent Connection Support for Discovery: 1 00:13:16.774 Transport Requirements: 00:13:16.774 Secure Channel: Not Required 00:13:16.774 Port ID: 0 (0x0000) 00:13:16.774 Controller ID: 65535 (0xffff) 00:13:16.774 Admin Max SQ Size: 128 00:13:16.774 Transport Service Identifier: 4420 00:13:16.774 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:13:16.774 Transport Address: 10.0.0.3 00:13:16.774 
Discovery Log Entry 1 00:13:16.774 ---------------------- 00:13:16.774 Transport Type: 3 (TCP) 00:13:16.774 Address Family: 1 (IPv4) 00:13:16.774 Subsystem Type: 2 (NVM Subsystem) 00:13:16.774 Entry Flags: 00:13:16.774 Duplicate Returned Information: 0 00:13:16.774 Explicit Persistent Connection Support for Discovery: 0 00:13:16.774 Transport Requirements: 00:13:16.774 Secure Channel: Not Required 00:13:16.774 Port ID: 0 (0x0000) 00:13:16.774 Controller ID: 65535 (0xffff) 00:13:16.774 Admin Max SQ Size: 128 00:13:16.774 Transport Service Identifier: 4420 00:13:16.774 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:13:16.774 Transport Address: 10.0.0.3 [2024-11-26 20:36:31.075001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.774 [2024-11-26 20:36:31.075007] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.774 [2024-11-26 20:36:31.075010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.774 [2024-11-26 20:36:31.075012] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfc3750) 00:13:16.774 [2024-11-26 20:36:31.075017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.774 [2024-11-26 20:36:31.075031] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027d40, cid 4, qid 0 00:13:16.774 [2024-11-26 20:36:31.075035] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027ec0, cid 5, qid 0 00:13:16.774 [2024-11-26 20:36:31.075120] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:16.774 [2024-11-26 20:36:31.075125] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:16.774 [2024-11-26 20:36:31.075127] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:16.774 [2024-11-26 20:36:31.075130] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfc3750): datao=0, datal=1024, cccid=4 00:13:16.774 [2024-11-26 20:36:31.075133] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1027d40) on tqpair(0xfc3750): expected_datao=0, payload_size=1024 00:13:16.774 [2024-11-26 20:36:31.075135] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.774 [2024-11-26 20:36:31.075141] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:16.774 [2024-11-26 20:36:31.075143] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:16.774 [2024-11-26 20:36:31.075147] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.774 [2024-11-26 20:36:31.075153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.774 [2024-11-26 20:36:31.075156] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.774 [2024-11-26 20:36:31.075161] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027ec0) on tqpair=0xfc3750 00:13:16.774 [2024-11-26 20:36:31.075176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.774 [2024-11-26 20:36:31.075181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.774 [2024-11-26 20:36:31.075183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.774 [2024-11-26 20:36:31.075186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027d40) on tqpair=0xfc3750 00:13:16.774 
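The discovery data above (a discovery subsystem entry plus the nqn.2016-06.io.spdk:cnode1 NVM subsystem, both served on 10.0.0.3:4420) is what a regular kernel initiator would see as well. This test drives SPDK's spdk_nvme_identify, but the same endpoints can be queried with nvme-cli from the default namespace, relying on the nvme-tcp module loaded by nvmf/common.sh earlier; the device name in the id-ctrl step is an assumption:

  nvme discover -t tcp -a 10.0.0.3 -s 4420
  nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ctrl /dev/nvme0              # assumed device name after connect
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1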
[2024-11-26 20:36:31.075194] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.774 [2024-11-26 20:36:31.075197] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfc3750) 00:13:16.774 [2024-11-26 20:36:31.075202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.774 [2024-11-26 20:36:31.075215] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027d40, cid 4, qid 0 00:13:16.774 [2024-11-26 20:36:31.075276] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:16.774 [2024-11-26 20:36:31.075281] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:16.774 [2024-11-26 20:36:31.075283] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:16.774 [2024-11-26 20:36:31.075285] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfc3750): datao=0, datal=3072, cccid=4 00:13:16.774 [2024-11-26 20:36:31.075288] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1027d40) on tqpair(0xfc3750): expected_datao=0, payload_size=3072 00:13:16.774 [2024-11-26 20:36:31.075291] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.774 [2024-11-26 20:36:31.075296] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:16.774 [2024-11-26 20:36:31.075299] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:16.774 [2024-11-26 20:36:31.075305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.774 [2024-11-26 20:36:31.075309] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.774 [2024-11-26 20:36:31.075312] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.774 [2024-11-26 20:36:31.075315] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027d40) on tqpair=0xfc3750 00:13:16.774 [2024-11-26 20:36:31.075322] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.774 [2024-11-26 20:36:31.075324] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfc3750) 00:13:16.774 [2024-11-26 20:36:31.075329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.774 [2024-11-26 20:36:31.075343] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027d40, cid 4, qid 0 00:13:16.774 [2024-11-26 20:36:31.075397] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:16.774 [2024-11-26 20:36:31.075401] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:16.774 [2024-11-26 20:36:31.075404] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:16.774 [2024-11-26 20:36:31.075406] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfc3750): datao=0, datal=8, cccid=4 00:13:16.774 [2024-11-26 20:36:31.075409] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1027d40) on tqpair(0xfc3750): expected_datao=0, payload_size=8 00:13:16.774 [2024-11-26 20:36:31.075412] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.774 [2024-11-26 20:36:31.075417] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:16.774 [2024-11-26 20:36:31.075419] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:16.774 
[2024-11-26 20:36:31.075429] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.774 [2024-11-26 20:36:31.075434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.774 [2024-11-26 20:36:31.075436] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.774 [2024-11-26 20:36:31.075439] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027d40) on tqpair=0xfc3750 00:13:16.774 [2024-11-26 20:36:31.075510] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:13:16.774 [2024-11-26 20:36:31.075517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027740) on tqpair=0xfc3750 00:13:16.774 [2024-11-26 20:36:31.075522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.774 [2024-11-26 20:36:31.075526] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10278c0) on tqpair=0xfc3750 00:13:16.774 [2024-11-26 20:36:31.075529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.774 [2024-11-26 20:36:31.075533] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027a40) on tqpair=0xfc3750 00:13:16.774 [2024-11-26 20:36:31.075536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.774 [2024-11-26 20:36:31.075539] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.774 [2024-11-26 20:36:31.075543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.774 [2024-11-26 20:36:31.075551] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.774 [2024-11-26 20:36:31.075554] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.774 [2024-11-26 20:36:31.075556] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.774 [2024-11-26 20:36:31.075562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.774 [2024-11-26 20:36:31.075575] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.775 [2024-11-26 20:36:31.075628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.775 [2024-11-26 20:36:31.075633] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.775 [2024-11-26 20:36:31.075637] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.075639] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.775 [2024-11-26 20:36:31.075645] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.075650] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.075652] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.775 [2024-11-26 20:36:31.075657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.775 [2024-11-26 20:36:31.075670] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.775 [2024-11-26 20:36:31.075727] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.775 [2024-11-26 20:36:31.075732] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.775 [2024-11-26 20:36:31.075734] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.075737] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.775 [2024-11-26 20:36:31.075740] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:13:16.775 [2024-11-26 20:36:31.075743] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:13:16.775 [2024-11-26 20:36:31.075750] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.075753] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.075755] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.775 [2024-11-26 20:36:31.075760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.775 [2024-11-26 20:36:31.075770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.775 [2024-11-26 20:36:31.075802] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.775 [2024-11-26 20:36:31.075807] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.775 [2024-11-26 20:36:31.075809] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.075812] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.775 [2024-11-26 20:36:31.075820] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.075822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.075825] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.775 [2024-11-26 20:36:31.075830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.775 [2024-11-26 20:36:31.075840] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.775 [2024-11-26 20:36:31.075881] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.775 [2024-11-26 20:36:31.075887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.775 [2024-11-26 20:36:31.075889] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.075892] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.775 [2024-11-26 20:36:31.075899] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.075901] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.075904] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.775 [2024-11-26 20:36:31.075909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.775 [2024-11-26 20:36:31.075919] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.775 [2024-11-26 20:36:31.075960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.775 [2024-11-26 20:36:31.075965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.775 [2024-11-26 20:36:31.075967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.075970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.775 [2024-11-26 20:36:31.075977] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.075980] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.075982] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.775 [2024-11-26 20:36:31.075988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.775 [2024-11-26 20:36:31.075997] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.775 [2024-11-26 20:36:31.076036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.775 [2024-11-26 20:36:31.076041] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.775 [2024-11-26 20:36:31.076043] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.076046] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.775 [2024-11-26 20:36:31.076053] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.076056] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.076058] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.775 [2024-11-26 20:36:31.076064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.775 [2024-11-26 20:36:31.076073] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.775 [2024-11-26 20:36:31.076107] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.775 [2024-11-26 20:36:31.076112] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.775 [2024-11-26 20:36:31.076115] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.076118] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.775 [2024-11-26 20:36:31.076126] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.076128] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.076131] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.775 [2024-11-26 20:36:31.076136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.775 [2024-11-26 20:36:31.076146] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.775 
[2024-11-26 20:36:31.076183] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.775 [2024-11-26 20:36:31.076192] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.775 [2024-11-26 20:36:31.076194] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.076197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.775 [2024-11-26 20:36:31.076205] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.076207] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.076209] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.775 [2024-11-26 20:36:31.076215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.775 [2024-11-26 20:36:31.076226] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.775 [2024-11-26 20:36:31.076262] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.775 [2024-11-26 20:36:31.076267] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.775 [2024-11-26 20:36:31.076269] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.076272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.775 [2024-11-26 20:36:31.076279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.076282] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.076284] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.775 [2024-11-26 20:36:31.076290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.775 [2024-11-26 20:36:31.076300] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.775 [2024-11-26 20:36:31.076336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.775 [2024-11-26 20:36:31.076341] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.775 [2024-11-26 20:36:31.076343] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.076346] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.775 [2024-11-26 20:36:31.076353] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.076356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.076358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.775 [2024-11-26 20:36:31.076363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.775 [2024-11-26 20:36:31.076373] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.775 [2024-11-26 20:36:31.076410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.775 [2024-11-26 20:36:31.076415] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.775 
[2024-11-26 20:36:31.076417] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.076420] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.775 [2024-11-26 20:36:31.076427] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.076430] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.076432] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.775 [2024-11-26 20:36:31.076437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.775 [2024-11-26 20:36:31.076447] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.775 [2024-11-26 20:36:31.076483] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.775 [2024-11-26 20:36:31.076493] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.775 [2024-11-26 20:36:31.076495] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.076498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.775 [2024-11-26 20:36:31.076506] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.775 [2024-11-26 20:36:31.076509] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.076511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.776 [2024-11-26 20:36:31.076516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.776 [2024-11-26 20:36:31.076526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.776 [2024-11-26 20:36:31.076565] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.776 [2024-11-26 20:36:31.076570] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.776 [2024-11-26 20:36:31.076572] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.076575] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.776 [2024-11-26 20:36:31.076583] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.076585] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.076598] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.776 [2024-11-26 20:36:31.076604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.776 [2024-11-26 20:36:31.076615] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.776 [2024-11-26 20:36:31.076654] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.776 [2024-11-26 20:36:31.076659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.776 [2024-11-26 20:36:31.076662] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.076665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on 
tqpair=0xfc3750 00:13:16.776 [2024-11-26 20:36:31.076672] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.076675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.076677] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.776 [2024-11-26 20:36:31.076682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.776 [2024-11-26 20:36:31.076692] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.776 [2024-11-26 20:36:31.076729] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.776 [2024-11-26 20:36:31.076733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.776 [2024-11-26 20:36:31.076736] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.076738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.776 [2024-11-26 20:36:31.076746] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.076748] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.076751] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.776 [2024-11-26 20:36:31.076756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.776 [2024-11-26 20:36:31.076766] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.776 [2024-11-26 20:36:31.076812] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.776 [2024-11-26 20:36:31.076817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.776 [2024-11-26 20:36:31.076819] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.076822] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.776 [2024-11-26 20:36:31.076830] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.076832] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.076835] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.776 [2024-11-26 20:36:31.076840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.776 [2024-11-26 20:36:31.076850] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.776 [2024-11-26 20:36:31.076889] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.776 [2024-11-26 20:36:31.076894] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.776 [2024-11-26 20:36:31.076896] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.076899] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.776 [2024-11-26 20:36:31.076907] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.076909] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.076912] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.776 [2024-11-26 20:36:31.076917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.776 [2024-11-26 20:36:31.076927] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.776 [2024-11-26 20:36:31.076964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.776 [2024-11-26 20:36:31.076970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.776 [2024-11-26 20:36:31.076973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.076975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.776 [2024-11-26 20:36:31.076983] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.076986] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.076988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.776 [2024-11-26 20:36:31.076994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.776 [2024-11-26 20:36:31.077004] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.776 [2024-11-26 20:36:31.077040] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.776 [2024-11-26 20:36:31.077046] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.776 [2024-11-26 20:36:31.077049] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.077052] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.776 [2024-11-26 20:36:31.077059] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.077062] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.077064] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.776 [2024-11-26 20:36:31.077070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.776 [2024-11-26 20:36:31.077080] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.776 [2024-11-26 20:36:31.077124] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.776 [2024-11-26 20:36:31.077133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.776 [2024-11-26 20:36:31.077135] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.077138] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.776 [2024-11-26 20:36:31.077146] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.077149] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.077151] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.776 [2024-11-26 20:36:31.077156] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.776 [2024-11-26 20:36:31.077166] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.776 [2024-11-26 20:36:31.077203] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.776 [2024-11-26 20:36:31.077209] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.776 [2024-11-26 20:36:31.077211] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.077214] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.776 [2024-11-26 20:36:31.077222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.077224] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.077227] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.776 [2024-11-26 20:36:31.077232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.776 [2024-11-26 20:36:31.077242] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.776 [2024-11-26 20:36:31.077284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.776 [2024-11-26 20:36:31.077289] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.776 [2024-11-26 20:36:31.077291] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.776 [2024-11-26 20:36:31.077294] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.776 [2024-11-26 20:36:31.077301] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.777 [2024-11-26 20:36:31.077304] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.777 [2024-11-26 20:36:31.077306] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.777 [2024-11-26 20:36:31.077312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.777 [2024-11-26 20:36:31.077322] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.777 [2024-11-26 20:36:31.077356] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.777 [2024-11-26 20:36:31.077361] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.777 [2024-11-26 20:36:31.077363] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.777 [2024-11-26 20:36:31.077366] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.777 [2024-11-26 20:36:31.077373] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.777 [2024-11-26 20:36:31.077377] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.777 [2024-11-26 20:36:31.077379] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.777 [2024-11-26 20:36:31.077385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.777 [2024-11-26 20:36:31.077395] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.777 [2024-11-26 20:36:31.077429] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.777 [2024-11-26 20:36:31.077434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.777 [2024-11-26 20:36:31.077436] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.777 [2024-11-26 20:36:31.077439] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.777 [2024-11-26 20:36:31.077446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.777 [2024-11-26 20:36:31.077449] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.777 [2024-11-26 20:36:31.077451] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.777 [2024-11-26 20:36:31.077456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.777 [2024-11-26 20:36:31.077466] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.777 [2024-11-26 20:36:31.077506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.777 [2024-11-26 20:36:31.077510] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.777 [2024-11-26 20:36:31.077513] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.777 [2024-11-26 20:36:31.077515] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.777 [2024-11-26 20:36:31.077523] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.777 [2024-11-26 20:36:31.077525] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.777 [2024-11-26 20:36:31.077528] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.777 [2024-11-26 20:36:31.077533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.777 [2024-11-26 20:36:31.077543] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.777 [2024-11-26 20:36:31.077581] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.777 [2024-11-26 20:36:31.077586] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.777 [2024-11-26 20:36:31.081612] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.777 [2024-11-26 20:36:31.081618] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.777 [2024-11-26 20:36:31.081630] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.777 [2024-11-26 20:36:31.081633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.777 [2024-11-26 20:36:31.081636] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfc3750) 00:13:16.777 [2024-11-26 20:36:31.081643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.777 [2024-11-26 20:36:31.081668] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1027bc0, cid 3, qid 0 00:13:16.777 [2024-11-26 20:36:31.081717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.777 [2024-11-26 20:36:31.081722] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.777 [2024-11-26 20:36:31.081725] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.777 [2024-11-26 20:36:31.081728] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1027bc0) on tqpair=0xfc3750 00:13:16.777 [2024-11-26 20:36:31.081734] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:13:16.777 00:13:16.777 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:13:16.777 [2024-11-26 20:36:31.114420] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:13:16.777 [2024-11-26 20:36:31.114456] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73170 ] 00:13:16.777 [2024-11-26 20:36:31.266650] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:13:16.777 [2024-11-26 20:36:31.266718] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:16.777 [2024-11-26 20:36:31.266722] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:16.777 [2024-11-26 20:36:31.266736] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:16.777 [2024-11-26 20:36:31.266745] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:13:16.777 [2024-11-26 20:36:31.267009] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:13:16.777 [2024-11-26 20:36:31.267041] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x246f750 0 00:13:16.777 [2024-11-26 20:36:31.273606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:16.777 [2024-11-26 20:36:31.273623] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:16.777 [2024-11-26 20:36:31.273627] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:16.777 [2024-11-26 20:36:31.273630] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:16.777 [2024-11-26 20:36:31.273658] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.777 [2024-11-26 20:36:31.273663] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.777 [2024-11-26 20:36:31.273667] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x246f750) 00:13:16.777 [2024-11-26 20:36:31.273681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:16.777 [2024-11-26 20:36:31.273704] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3740, cid 0, qid 0 00:13:16.777 [2024-11-26 20:36:31.281606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.777 [2024-11-26 20:36:31.281622] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.777 [2024-11-26 20:36:31.281625] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.777 [2024-11-26 20:36:31.281629] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3740) on tqpair=0x246f750 00:13:16.777 [2024-11-26 20:36:31.281638] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:16.777 [2024-11-26 20:36:31.281645] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:13:16.777 [2024-11-26 20:36:31.281650] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:13:16.777 [2024-11-26 20:36:31.281668] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.777 [2024-11-26 20:36:31.281671] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.777 [2024-11-26 20:36:31.281674] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x246f750) 00:13:16.777 [2024-11-26 20:36:31.281682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.777 [2024-11-26 20:36:31.281702] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3740, cid 0, qid 0 00:13:16.777 [2024-11-26 20:36:31.281757] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.777 [2024-11-26 20:36:31.281762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.777 [2024-11-26 20:36:31.281764] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.777 [2024-11-26 20:36:31.281767] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3740) on tqpair=0x246f750 00:13:16.777 [2024-11-26 20:36:31.281771] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:13:16.777 [2024-11-26 20:36:31.281776] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:13:16.777 [2024-11-26 20:36:31.281782] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.777 [2024-11-26 20:36:31.281784] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.777 [2024-11-26 20:36:31.281787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x246f750) 00:13:16.777 [2024-11-26 20:36:31.281792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.777 [2024-11-26 20:36:31.281803] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3740, cid 0, qid 0 00:13:16.777 [2024-11-26 20:36:31.281847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.777 [2024-11-26 20:36:31.281852] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.777 [2024-11-26 20:36:31.281854] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.777 [2024-11-26 20:36:31.281857] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3740) on tqpair=0x246f750 00:13:16.777 [2024-11-26 20:36:31.281861] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:13:16.777 [2024-11-26 20:36:31.281867] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc 
(timeout 15000 ms) 00:13:16.777 [2024-11-26 20:36:31.281873] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.777 [2024-11-26 20:36:31.281876] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.777 [2024-11-26 20:36:31.281878] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x246f750) 00:13:16.777 [2024-11-26 20:36:31.281884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.777 [2024-11-26 20:36:31.281894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3740, cid 0, qid 0 00:13:16.777 [2024-11-26 20:36:31.281948] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.777 [2024-11-26 20:36:31.281953] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.777 [2024-11-26 20:36:31.281955] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.778 [2024-11-26 20:36:31.281958] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3740) on tqpair=0x246f750 00:13:16.778 [2024-11-26 20:36:31.281962] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:16.778 [2024-11-26 20:36:31.281969] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.778 [2024-11-26 20:36:31.281972] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.778 [2024-11-26 20:36:31.281974] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x246f750) 00:13:16.778 [2024-11-26 20:36:31.281979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.778 [2024-11-26 20:36:31.281989] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3740, cid 0, qid 0 00:13:16.778 [2024-11-26 20:36:31.282038] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.778 [2024-11-26 20:36:31.282043] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.778 [2024-11-26 20:36:31.282045] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.778 [2024-11-26 20:36:31.282048] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3740) on tqpair=0x246f750 00:13:16.778 [2024-11-26 20:36:31.282052] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:13:16.778 [2024-11-26 20:36:31.282055] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:13:16.778 [2024-11-26 20:36:31.282060] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:16.778 [2024-11-26 20:36:31.282164] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:13:16.778 [2024-11-26 20:36:31.282167] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:16.778 [2024-11-26 20:36:31.282174] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.778 [2024-11-26 20:36:31.282176] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:13:16.778 [2024-11-26 20:36:31.282178] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x246f750) 00:13:16.778 [2024-11-26 20:36:31.282184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.778 [2024-11-26 20:36:31.282195] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3740, cid 0, qid 0 00:13:16.778 [2024-11-26 20:36:31.282241] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.778 [2024-11-26 20:36:31.282246] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.778 [2024-11-26 20:36:31.282248] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.778 [2024-11-26 20:36:31.282251] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3740) on tqpair=0x246f750 00:13:16.778 [2024-11-26 20:36:31.282255] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:16.778 [2024-11-26 20:36:31.282261] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.778 [2024-11-26 20:36:31.282264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.778 [2024-11-26 20:36:31.282267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x246f750) 00:13:16.778 [2024-11-26 20:36:31.282272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.778 [2024-11-26 20:36:31.282282] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3740, cid 0, qid 0 00:13:16.778 [2024-11-26 20:36:31.282319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.778 [2024-11-26 20:36:31.282323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.778 [2024-11-26 20:36:31.282325] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.778 [2024-11-26 20:36:31.282328] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3740) on tqpair=0x246f750 00:13:16.778 [2024-11-26 20:36:31.282332] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:16.778 [2024-11-26 20:36:31.282335] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:13:16.778 [2024-11-26 20:36:31.282340] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:13:16.778 [2024-11-26 20:36:31.282346] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:13:16.778 [2024-11-26 20:36:31.282354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.778 [2024-11-26 20:36:31.282356] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x246f750) 00:13:16.778 [2024-11-26 20:36:31.282362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.778 [2024-11-26 20:36:31.282372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3740, cid 0, qid 0 00:13:16.778 [2024-11-26 
20:36:31.282452] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:16.778 [2024-11-26 20:36:31.282456] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:16.778 [2024-11-26 20:36:31.282459] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:16.778 [2024-11-26 20:36:31.282462] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x246f750): datao=0, datal=4096, cccid=0 00:13:16.778 [2024-11-26 20:36:31.282465] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24d3740) on tqpair(0x246f750): expected_datao=0, payload_size=4096 00:13:16.778 [2024-11-26 20:36:31.282468] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.778 [2024-11-26 20:36:31.282475] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:16.778 [2024-11-26 20:36:31.282478] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:16.778 [2024-11-26 20:36:31.282485] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.778 [2024-11-26 20:36:31.282489] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.778 [2024-11-26 20:36:31.282491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.778 [2024-11-26 20:36:31.282494] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3740) on tqpair=0x246f750 00:13:16.778 [2024-11-26 20:36:31.282500] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:13:16.778 [2024-11-26 20:36:31.282503] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:13:16.778 [2024-11-26 20:36:31.282507] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:13:16.778 [2024-11-26 20:36:31.282513] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:13:16.778 [2024-11-26 20:36:31.282516] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:13:16.778 [2024-11-26 20:36:31.282519] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:13:16.778 [2024-11-26 20:36:31.282525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:13:16.778 [2024-11-26 20:36:31.282530] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.778 [2024-11-26 20:36:31.282533] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.778 [2024-11-26 20:36:31.282535] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x246f750) 00:13:16.778 [2024-11-26 20:36:31.282540] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:16.778 [2024-11-26 20:36:31.282551] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3740, cid 0, qid 0 00:13:16.778 [2024-11-26 20:36:31.282605] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.778 [2024-11-26 20:36:31.282611] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.778 [2024-11-26 20:36:31.282613] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.778 [2024-11-26 
20:36:31.282616] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3740) on tqpair=0x246f750 00:13:16.778 [2024-11-26 20:36:31.282622] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.778 [2024-11-26 20:36:31.282625] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.778 [2024-11-26 20:36:31.282627] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x246f750) 00:13:16.778 [2024-11-26 20:36:31.282632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.778 [2024-11-26 20:36:31.282637] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.778 [2024-11-26 20:36:31.282639] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.778 [2024-11-26 20:36:31.282642] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x246f750) 00:13:16.778 [2024-11-26 20:36:31.282646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.778 [2024-11-26 20:36:31.282651] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.778 [2024-11-26 20:36:31.282653] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.778 [2024-11-26 20:36:31.282655] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x246f750) 00:13:16.778 [2024-11-26 20:36:31.282660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.778 [2024-11-26 20:36:31.282664] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.778 [2024-11-26 20:36:31.282667] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.778 [2024-11-26 20:36:31.282669] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x246f750) 00:13:16.778 [2024-11-26 20:36:31.282674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.778 [2024-11-26 20:36:31.282677] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:16.778 [2024-11-26 20:36:31.282682] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:16.778 [2024-11-26 20:36:31.282687] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.778 [2024-11-26 20:36:31.282690] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x246f750) 00:13:16.778 [2024-11-26 20:36:31.282695] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.778 [2024-11-26 20:36:31.282710] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3740, cid 0, qid 0 00:13:16.778 [2024-11-26 20:36:31.282715] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d38c0, cid 1, qid 0 00:13:16.778 [2024-11-26 20:36:31.282718] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3a40, cid 2, qid 0 00:13:16.778 [2024-11-26 20:36:31.282722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3bc0, cid 3, qid 
0 00:13:16.778 [2024-11-26 20:36:31.282725] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3d40, cid 4, qid 0 00:13:16.778 [2024-11-26 20:36:31.282812] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.779 [2024-11-26 20:36:31.282817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.779 [2024-11-26 20:36:31.282819] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.282822] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3d40) on tqpair=0x246f750 00:13:16.779 [2024-11-26 20:36:31.282826] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:13:16.779 [2024-11-26 20:36:31.282829] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:16.779 [2024-11-26 20:36:31.282835] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:13:16.779 [2024-11-26 20:36:31.282840] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:16.779 [2024-11-26 20:36:31.282844] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.282847] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.282849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x246f750) 00:13:16.779 [2024-11-26 20:36:31.282855] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:16.779 [2024-11-26 20:36:31.282865] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3d40, cid 4, qid 0 00:13:16.779 [2024-11-26 20:36:31.282907] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.779 [2024-11-26 20:36:31.282911] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.779 [2024-11-26 20:36:31.282914] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.282916] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3d40) on tqpair=0x246f750 00:13:16.779 [2024-11-26 20:36:31.282976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:13:16.779 [2024-11-26 20:36:31.282982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:16.779 [2024-11-26 20:36:31.282988] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.282990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x246f750) 00:13:16.779 [2024-11-26 20:36:31.282996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.779 [2024-11-26 20:36:31.283006] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3d40, cid 4, qid 0 00:13:16.779 [2024-11-26 20:36:31.283055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:16.779 [2024-11-26 
20:36:31.283064] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:16.779 [2024-11-26 20:36:31.283067] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.283069] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x246f750): datao=0, datal=4096, cccid=4 00:13:16.779 [2024-11-26 20:36:31.283072] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24d3d40) on tqpair(0x246f750): expected_datao=0, payload_size=4096 00:13:16.779 [2024-11-26 20:36:31.283075] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.283081] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.283083] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.283089] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.779 [2024-11-26 20:36:31.283095] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.779 [2024-11-26 20:36:31.283097] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.283099] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3d40) on tqpair=0x246f750 00:13:16.779 [2024-11-26 20:36:31.283107] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:13:16.779 [2024-11-26 20:36:31.283114] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:13:16.779 [2024-11-26 20:36:31.283121] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:13:16.779 [2024-11-26 20:36:31.283126] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.283129] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x246f750) 00:13:16.779 [2024-11-26 20:36:31.283134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.779 [2024-11-26 20:36:31.283145] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3d40, cid 4, qid 0 00:13:16.779 [2024-11-26 20:36:31.283208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:16.779 [2024-11-26 20:36:31.283213] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:16.779 [2024-11-26 20:36:31.283216] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.283218] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x246f750): datao=0, datal=4096, cccid=4 00:13:16.779 [2024-11-26 20:36:31.283221] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24d3d40) on tqpair(0x246f750): expected_datao=0, payload_size=4096 00:13:16.779 [2024-11-26 20:36:31.283224] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.283229] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.283231] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.283237] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.779 [2024-11-26 20:36:31.283242] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:13:16.779 [2024-11-26 20:36:31.283244] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.283247] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3d40) on tqpair=0x246f750 00:13:16.779 [2024-11-26 20:36:31.283257] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:16.779 [2024-11-26 20:36:31.283264] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:16.779 [2024-11-26 20:36:31.283269] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.283271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x246f750) 00:13:16.779 [2024-11-26 20:36:31.283276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.779 [2024-11-26 20:36:31.283287] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3d40, cid 4, qid 0 00:13:16.779 [2024-11-26 20:36:31.283335] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:16.779 [2024-11-26 20:36:31.283340] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:16.779 [2024-11-26 20:36:31.283342] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.283344] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x246f750): datao=0, datal=4096, cccid=4 00:13:16.779 [2024-11-26 20:36:31.283347] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24d3d40) on tqpair(0x246f750): expected_datao=0, payload_size=4096 00:13:16.779 [2024-11-26 20:36:31.283350] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.283355] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.283357] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.283363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.779 [2024-11-26 20:36:31.283368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.779 [2024-11-26 20:36:31.283370] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.283372] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3d40) on tqpair=0x246f750 00:13:16.779 [2024-11-26 20:36:31.283378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:16.779 [2024-11-26 20:36:31.283384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:13:16.779 [2024-11-26 20:36:31.283393] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:13:16.779 [2024-11-26 20:36:31.283398] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:13:16.779 [2024-11-26 20:36:31.283402] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set 
doorbell buffer config (timeout 30000 ms) 00:13:16.779 [2024-11-26 20:36:31.283406] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:13:16.779 [2024-11-26 20:36:31.283409] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:13:16.779 [2024-11-26 20:36:31.283413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:13:16.779 [2024-11-26 20:36:31.283416] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:13:16.779 [2024-11-26 20:36:31.283430] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.283432] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x246f750) 00:13:16.779 [2024-11-26 20:36:31.283437] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.779 [2024-11-26 20:36:31.283443] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.283445] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.283448] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x246f750) 00:13:16.779 [2024-11-26 20:36:31.283452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.779 [2024-11-26 20:36:31.283466] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3d40, cid 4, qid 0 00:13:16.779 [2024-11-26 20:36:31.283470] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3ec0, cid 5, qid 0 00:13:16.779 [2024-11-26 20:36:31.283533] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.779 [2024-11-26 20:36:31.283538] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.779 [2024-11-26 20:36:31.283540] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.283543] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3d40) on tqpair=0x246f750 00:13:16.779 [2024-11-26 20:36:31.283548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.779 [2024-11-26 20:36:31.283552] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.779 [2024-11-26 20:36:31.283555] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.283557] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3ec0) on tqpair=0x246f750 00:13:16.779 [2024-11-26 20:36:31.283564] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.779 [2024-11-26 20:36:31.283567] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x246f750) 00:13:16.780 [2024-11-26 20:36:31.283572] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.780 [2024-11-26 20:36:31.283582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3ec0, cid 5, qid 0 00:13:16.780 [2024-11-26 20:36:31.283637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.780 
[2024-11-26 20:36:31.283642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.780 [2024-11-26 20:36:31.283645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.780 [2024-11-26 20:36:31.283648] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3ec0) on tqpair=0x246f750 00:13:16.780 [2024-11-26 20:36:31.283655] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.780 [2024-11-26 20:36:31.283657] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x246f750) 00:13:16.780 [2024-11-26 20:36:31.283662] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.780 [2024-11-26 20:36:31.283673] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3ec0, cid 5, qid 0 00:13:16.780 [2024-11-26 20:36:31.283715] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.780 [2024-11-26 20:36:31.283720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.780 [2024-11-26 20:36:31.283722] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.780 [2024-11-26 20:36:31.283724] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3ec0) on tqpair=0x246f750 00:13:16.780 [2024-11-26 20:36:31.283732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.780 [2024-11-26 20:36:31.283734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x246f750) 00:13:16.780 [2024-11-26 20:36:31.283739] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.780 [2024-11-26 20:36:31.283749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3ec0, cid 5, qid 0 00:13:16.780 [2024-11-26 20:36:31.283795] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.780 [2024-11-26 20:36:31.283804] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.780 [2024-11-26 20:36:31.283807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.780 [2024-11-26 20:36:31.283809] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3ec0) on tqpair=0x246f750 00:13:16.780 [2024-11-26 20:36:31.283821] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.780 [2024-11-26 20:36:31.283824] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x246f750) 00:13:16.780 [2024-11-26 20:36:31.283830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.780 [2024-11-26 20:36:31.283836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.780 [2024-11-26 20:36:31.283838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x246f750) 00:13:16.780 [2024-11-26 20:36:31.283843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.780 [2024-11-26 20:36:31.283849] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.780 [2024-11-26 20:36:31.283852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on 
tqpair(0x246f750) 00:13:16.780 [2024-11-26 20:36:31.283857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.780 [2024-11-26 20:36:31.283863] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.780 [2024-11-26 20:36:31.283865] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x246f750) 00:13:16.780 [2024-11-26 20:36:31.283870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.780 [2024-11-26 20:36:31.283882] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3ec0, cid 5, qid 0 00:13:16.780 [2024-11-26 20:36:31.283886] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3d40, cid 4, qid 0 00:13:16.780 [2024-11-26 20:36:31.283889] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d4040, cid 6, qid 0 00:13:16.780 [2024-11-26 20:36:31.283893] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d41c0, cid 7, qid 0 00:13:16.780 [2024-11-26 20:36:31.284015] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:16.780 [2024-11-26 20:36:31.284020] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:16.780 [2024-11-26 20:36:31.284023] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:16.780 [2024-11-26 20:36:31.284025] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x246f750): datao=0, datal=8192, cccid=5 00:13:16.780 [2024-11-26 20:36:31.284028] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24d3ec0) on tqpair(0x246f750): expected_datao=0, payload_size=8192 00:13:16.780 [2024-11-26 20:36:31.284031] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.780 ===================================================== 00:13:16.780 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:16.780 ===================================================== 00:13:16.780 Controller Capabilities/Features 00:13:16.780 ================================ 00:13:16.780 Vendor ID: 8086 00:13:16.780 Subsystem Vendor ID: 8086 00:13:16.780 Serial Number: SPDK00000000000001 00:13:16.780 Model Number: SPDK bdev Controller 00:13:16.780 Firmware Version: 25.01 00:13:16.780 Recommended Arb Burst: 6 00:13:16.780 IEEE OUI Identifier: e4 d2 5c 00:13:16.780 Multi-path I/O 00:13:16.780 May have multiple subsystem ports: Yes 00:13:16.780 May have multiple controllers: Yes 00:13:16.780 Associated with SR-IOV VF: No 00:13:16.780 Max Data Transfer Size: 131072 00:13:16.780 Max Number of Namespaces: 32 00:13:16.780 Max Number of I/O Queues: 127 00:13:16.780 NVMe Specification Version (VS): 1.3 00:13:16.780 NVMe Specification Version (Identify): 1.3 00:13:16.780 Maximum Queue Entries: 128 00:13:16.780 Contiguous Queues Required: Yes 00:13:16.780 Arbitration Mechanisms Supported 00:13:16.780 Weighted Round Robin: Not Supported 00:13:16.780 Vendor Specific: Not Supported 00:13:16.780 Reset Timeout: 15000 ms 00:13:16.780 Doorbell Stride: 4 bytes 00:13:16.780 NVM Subsystem Reset: Not Supported 00:13:16.780 Command Sets Supported 00:13:16.780 NVM Command Set: Supported 00:13:16.780 Boot Partition: Not Supported 00:13:16.780 Memory Page Size Minimum: 4096 bytes 00:13:16.780 Memory Page Size Maximum: 4096 bytes 
00:13:16.780 Persistent Memory Region: Not Supported 00:13:16.780 Optional Asynchronous Events Supported 00:13:16.780 Namespace Attribute Notices: Supported 00:13:16.780 Firmware Activation Notices: Not Supported 00:13:16.780 ANA Change Notices: Not Supported 00:13:16.780 PLE Aggregate Log Change Notices: Not Supported 00:13:16.780 LBA Status Info Alert Notices: Not Supported 00:13:16.780 EGE Aggregate Log Change Notices: Not Supported 00:13:16.780 Normal NVM Subsystem Shutdown event: Not Supported 00:13:16.780 Zone Descriptor Change Notices: Not Supported 00:13:16.780 Discovery Log Change Notices: Not Supported 00:13:16.780 Controller Attributes 00:13:16.780 128-bit Host Identifier: Supported 00:13:16.780 Non-Operational Permissive Mode: Not Supported 00:13:16.780 NVM Sets: Not Supported 00:13:16.780 Read Recovery Levels: Not Supported 00:13:16.780 Endurance Groups: Not Supported 00:13:16.780 Predictable Latency Mode: Not Supported 00:13:16.780 Traffic Based Keep ALive: Not Supported 00:13:16.780 Namespace Granularity: Not Supported 00:13:16.780 SQ Associations: Not Supported 00:13:16.780 UUID List: Not Supported 00:13:16.780 Multi-Domain Subsystem: Not Supported 00:13:16.780 Fixed Capacity Management: Not Supported 00:13:16.780 Variable Capacity Management: Not Supported 00:13:16.780 Delete Endurance Group: Not Supported 00:13:16.780 Delete NVM Set: Not Supported 00:13:16.780 Extended LBA Formats Supported: Not Supported 00:13:16.780 Flexible Data Placement Supported: Not Supported 00:13:16.780 00:13:16.780 Controller Memory Buffer Support 00:13:16.780 ================================ 00:13:16.780 Supported: No 00:13:16.780 00:13:16.780 Persistent Memory Region Support 00:13:16.780 ================================ 00:13:16.780 Supported: No 00:13:16.780 00:13:16.780 Admin Command Set Attributes 00:13:16.780 ============================ 00:13:16.780 Security Send/Receive: Not Supported 00:13:16.780 Format NVM: Not Supported 00:13:16.780 Firmware Activate/Download: Not Supported 00:13:16.780 Namespace Management: Not Supported 00:13:16.780 Device Self-Test: Not Supported 00:13:16.780 Directives: Not Supported 00:13:16.780 NVMe-MI: Not Supported 00:13:16.780 Virtualization Management: Not Supported 00:13:16.780 Doorbell Buffer Config: Not Supported 00:13:16.780 Get LBA Status Capability: Not Supported 00:13:16.781 Command & Feature Lockdown Capability: Not Supported 00:13:16.781 Abort Command Limit: 4 00:13:16.781 Async Event Request Limit: 4 00:13:16.781 Number of Firmware Slots: N/A 00:13:16.781 Firmware Slot 1 Read-Only: N/A 00:13:16.781 Firmware Activation Without Reset: [2024-11-26 20:36:31.284043] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:16.781 [2024-11-26 20:36:31.284046] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:16.781 [2024-11-26 20:36:31.284051] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:16.781 [2024-11-26 20:36:31.284055] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:16.781 [2024-11-26 20:36:31.284057] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:16.781 [2024-11-26 20:36:31.284060] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x246f750): datao=0, datal=512, cccid=4 00:13:16.781 [2024-11-26 20:36:31.284063] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24d3d40) on tqpair(0x246f750): expected_datao=0, payload_size=512 00:13:16.781 [2024-11-26 20:36:31.284066] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.781 [2024-11-26 20:36:31.284071] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:16.781 [2024-11-26 20:36:31.284073] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:16.781 [2024-11-26 20:36:31.284077] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:16.781 [2024-11-26 20:36:31.284082] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:16.781 [2024-11-26 20:36:31.284084] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:16.781 [2024-11-26 20:36:31.284086] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x246f750): datao=0, datal=512, cccid=6 00:13:16.781 [2024-11-26 20:36:31.284089] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24d4040) on tqpair(0x246f750): expected_datao=0, payload_size=512 00:13:16.781 [2024-11-26 20:36:31.284092] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.781 [2024-11-26 20:36:31.284097] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:16.781 [2024-11-26 20:36:31.284099] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:16.781 [2024-11-26 20:36:31.284104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:16.781 [2024-11-26 20:36:31.284108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:16.781 [2024-11-26 20:36:31.284110] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:16.781 [2024-11-26 20:36:31.284112] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x246f750): datao=0, datal=4096, cccid=7 00:13:16.781 [2024-11-26 20:36:31.284115] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24d41c0) on tqpair(0x246f750): expected_datao=0, payload_size=4096 00:13:16.781 [2024-11-26 20:36:31.284118] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.781 [2024-11-26 20:36:31.284124] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:16.781 [2024-11-26 20:36:31.284126] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:16.781 [2024-11-26 20:36:31.284132] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.781 [2024-11-26 20:36:31.284136] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.781 [2024-11-26 20:36:31.284139] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.781 [2024-11-26 20:36:31.284141] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3ec0) on tqpair=0x246f750 00:13:16.781 [2024-11-26 20:36:31.284152] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.781 [2024-11-26 20:36:31.284157] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.781 [2024-11-26 20:36:31.284159] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.781 [2024-11-26 20:36:31.284161] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3d40) on tqpair=0x246f750 00:13:16.781 [2024-11-26 20:36:31.284171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.781 [2024-11-26 20:36:31.284175] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.781 [2024-11-26 20:36:31.284177] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.781 [2024-11-26 20:36:31.284180] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d4040) on 
tqpair=0x246f750 00:13:16.781 [2024-11-26 20:36:31.284186] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.781 [2024-11-26 20:36:31.284190] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.781 [2024-11-26 20:36:31.284192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.781 [2024-11-26 20:36:31.284195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d41c0) on tqpair=0x246f750 00:13:16.781 N/A 00:13:16.781 Multiple Update Detection Support: N/A 00:13:16.781 Firmware Update Granularity: No Information Provided 00:13:16.781 Per-Namespace SMART Log: No 00:13:16.781 Asymmetric Namespace Access Log Page: Not Supported 00:13:16.781 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:13:16.781 Command Effects Log Page: Supported 00:13:16.781 Get Log Page Extended Data: Supported 00:13:16.781 Telemetry Log Pages: Not Supported 00:13:16.781 Persistent Event Log Pages: Not Supported 00:13:16.781 Supported Log Pages Log Page: May Support 00:13:16.781 Commands Supported & Effects Log Page: Not Supported 00:13:16.781 Feature Identifiers & Effects Log Page:May Support 00:13:16.781 NVMe-MI Commands & Effects Log Page: May Support 00:13:16.781 Data Area 4 for Telemetry Log: Not Supported 00:13:16.781 Error Log Page Entries Supported: 128 00:13:16.781 Keep Alive: Supported 00:13:16.781 Keep Alive Granularity: 10000 ms 00:13:16.781 00:13:16.781 NVM Command Set Attributes 00:13:16.781 ========================== 00:13:16.781 Submission Queue Entry Size 00:13:16.781 Max: 64 00:13:16.781 Min: 64 00:13:16.781 Completion Queue Entry Size 00:13:16.781 Max: 16 00:13:16.781 Min: 16 00:13:16.781 Number of Namespaces: 32 00:13:16.781 Compare Command: Supported 00:13:16.781 Write Uncorrectable Command: Not Supported 00:13:16.781 Dataset Management Command: Supported 00:13:16.781 Write Zeroes Command: Supported 00:13:16.781 Set Features Save Field: Not Supported 00:13:16.781 Reservations: Supported 00:13:16.781 Timestamp: Not Supported 00:13:16.781 Copy: Supported 00:13:16.781 Volatile Write Cache: Present 00:13:16.781 Atomic Write Unit (Normal): 1 00:13:16.781 Atomic Write Unit (PFail): 1 00:13:16.781 Atomic Compare & Write Unit: 1 00:13:16.781 Fused Compare & Write: Supported 00:13:16.781 Scatter-Gather List 00:13:16.781 SGL Command Set: Supported 00:13:16.781 SGL Keyed: Supported 00:13:16.781 SGL Bit Bucket Descriptor: Not Supported 00:13:16.781 SGL Metadata Pointer: Not Supported 00:13:16.781 Oversized SGL: Not Supported 00:13:16.781 SGL Metadata Address: Not Supported 00:13:16.781 SGL Offset: Supported 00:13:16.781 Transport SGL Data Block: Not Supported 00:13:16.781 Replay Protected Memory Block: Not Supported 00:13:16.781 00:13:16.781 Firmware Slot Information 00:13:16.781 ========================= 00:13:16.781 Active slot: 1 00:13:16.781 Slot 1 Firmware Revision: 25.01 00:13:16.781 00:13:16.781 00:13:16.781 Commands Supported and Effects 00:13:16.781 ============================== 00:13:16.781 Admin Commands 00:13:16.781 -------------- 00:13:16.781 Get Log Page (02h): Supported 00:13:16.781 Identify (06h): Supported 00:13:16.781 Abort (08h): Supported 00:13:16.781 Set Features (09h): Supported 00:13:16.781 Get Features (0Ah): Supported 00:13:16.781 Asynchronous Event Request (0Ch): Supported 00:13:16.781 Keep Alive (18h): Supported 00:13:16.781 I/O Commands 00:13:16.781 ------------ 00:13:16.781 Flush (00h): Supported LBA-Change 00:13:16.781 Write (01h): Supported LBA-Change 00:13:16.781 Read (02h): Supported 00:13:16.781 
Compare (05h): Supported 00:13:16.781 Write Zeroes (08h): Supported LBA-Change 00:13:16.781 Dataset Management (09h): Supported LBA-Change 00:13:16.781 Copy (19h): Supported LBA-Change 00:13:16.781 00:13:16.781 Error Log 00:13:16.781 ========= 00:13:16.781 00:13:16.781 Arbitration 00:13:16.781 =========== 00:13:16.781 Arbitration Burst: 1 00:13:16.781 00:13:16.781 Power Management 00:13:16.781 ================ 00:13:16.781 Number of Power States: 1 00:13:16.781 Current Power State: Power State #0 00:13:16.781 Power State #0: 00:13:16.781 Max Power: 0.00 W 00:13:16.781 Non-Operational State: Operational 00:13:16.781 Entry Latency: Not Reported 00:13:16.781 Exit Latency: Not Reported 00:13:16.781 Relative Read Throughput: 0 00:13:16.781 Relative Read Latency: 0 00:13:16.781 Relative Write Throughput: 0 00:13:16.781 Relative Write Latency: 0 00:13:16.781 Idle Power: Not Reported 00:13:16.781 Active Power: Not Reported 00:13:16.781 Non-Operational Permissive Mode: Not Supported 00:13:16.781 00:13:16.781 Health Information 00:13:16.781 ================== 00:13:16.781 Critical Warnings: 00:13:16.781 Available Spare Space: OK 00:13:16.781 Temperature: OK 00:13:16.781 Device Reliability: OK 00:13:16.781 Read Only: No 00:13:16.781 Volatile Memory Backup: OK 00:13:16.781 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:16.781 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:16.781 Available Spare: 0% 00:13:16.781 Available Spare Threshold: 0% 00:13:16.781 Life Percentage Used:[2024-11-26 20:36:31.284281] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.781 [2024-11-26 20:36:31.284285] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x246f750) 00:13:16.781 [2024-11-26 20:36:31.284291] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.781 [2024-11-26 20:36:31.284303] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d41c0, cid 7, qid 0 00:13:16.782 [2024-11-26 20:36:31.284349] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.782 [2024-11-26 20:36:31.284354] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.782 [2024-11-26 20:36:31.284356] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.284358] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d41c0) on tqpair=0x246f750 00:13:16.782 [2024-11-26 20:36:31.284384] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:13:16.782 [2024-11-26 20:36:31.284390] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3740) on tqpair=0x246f750 00:13:16.782 [2024-11-26 20:36:31.284395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.782 [2024-11-26 20:36:31.284399] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d38c0) on tqpair=0x246f750 00:13:16.782 [2024-11-26 20:36:31.284402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.782 [2024-11-26 20:36:31.284405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3a40) on tqpair=0x246f750 00:13:16.782 [2024-11-26 20:36:31.284409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.782 [2024-11-26 20:36:31.284412] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3bc0) on tqpair=0x246f750 00:13:16.782 [2024-11-26 20:36:31.284415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.782 [2024-11-26 20:36:31.284422] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.284424] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.284427] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x246f750) 00:13:16.782 [2024-11-26 20:36:31.284432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.782 [2024-11-26 20:36:31.284446] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3bc0, cid 3, qid 0 00:13:16.782 [2024-11-26 20:36:31.284483] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.782 [2024-11-26 20:36:31.284487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.782 [2024-11-26 20:36:31.284490] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.284492] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3bc0) on tqpair=0x246f750 00:13:16.782 [2024-11-26 20:36:31.284498] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.284500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.284503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x246f750) 00:13:16.782 [2024-11-26 20:36:31.284508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.782 [2024-11-26 20:36:31.284520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3bc0, cid 3, qid 0 00:13:16.782 [2024-11-26 20:36:31.284569] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.782 [2024-11-26 20:36:31.284573] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.782 [2024-11-26 20:36:31.284576] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.284578] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3bc0) on tqpair=0x246f750 00:13:16.782 [2024-11-26 20:36:31.284582] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:13:16.782 [2024-11-26 20:36:31.284585] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:13:16.782 [2024-11-26 20:36:31.284605] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.284608] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.284611] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x246f750) 00:13:16.782 [2024-11-26 20:36:31.284616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.782 [2024-11-26 20:36:31.284627] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3bc0, cid 3, qid 0 00:13:16.782 
[2024-11-26 20:36:31.284671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.782 [2024-11-26 20:36:31.284676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.782 [2024-11-26 20:36:31.284678] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.284680] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3bc0) on tqpair=0x246f750 00:13:16.782 [2024-11-26 20:36:31.284688] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.284691] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.284693] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x246f750) 00:13:16.782 [2024-11-26 20:36:31.284699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.782 [2024-11-26 20:36:31.284709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3bc0, cid 3, qid 0 00:13:16.782 [2024-11-26 20:36:31.284748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.782 [2024-11-26 20:36:31.284756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.782 [2024-11-26 20:36:31.284759] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.284762] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3bc0) on tqpair=0x246f750 00:13:16.782 [2024-11-26 20:36:31.284769] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.284772] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.284774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x246f750) 00:13:16.782 [2024-11-26 20:36:31.284780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.782 [2024-11-26 20:36:31.284790] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3bc0, cid 3, qid 0 00:13:16.782 [2024-11-26 20:36:31.284831] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.782 [2024-11-26 20:36:31.284836] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.782 [2024-11-26 20:36:31.284838] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.284840] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3bc0) on tqpair=0x246f750 00:13:16.782 [2024-11-26 20:36:31.284848] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.284850] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.284853] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x246f750) 00:13:16.782 [2024-11-26 20:36:31.284858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.782 [2024-11-26 20:36:31.284868] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3bc0, cid 3, qid 0 00:13:16.782 [2024-11-26 20:36:31.284906] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.782 [2024-11-26 20:36:31.284915] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:13:16.782 [2024-11-26 20:36:31.284917] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.284920] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3bc0) on tqpair=0x246f750 00:13:16.782 [2024-11-26 20:36:31.284928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.284931] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.284933] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x246f750) 00:13:16.782 [2024-11-26 20:36:31.284939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.782 [2024-11-26 20:36:31.284948] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3bc0, cid 3, qid 0 00:13:16.782 [2024-11-26 20:36:31.284982] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.782 [2024-11-26 20:36:31.284987] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.782 [2024-11-26 20:36:31.284989] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.284992] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3bc0) on tqpair=0x246f750 00:13:16.782 [2024-11-26 20:36:31.284999] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.285002] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.285004] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x246f750) 00:13:16.782 [2024-11-26 20:36:31.285009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.782 [2024-11-26 20:36:31.285019] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3bc0, cid 3, qid 0 00:13:16.782 [2024-11-26 20:36:31.285060] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.782 [2024-11-26 20:36:31.285065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.782 [2024-11-26 20:36:31.285067] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.285069] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3bc0) on tqpair=0x246f750 00:13:16.782 [2024-11-26 20:36:31.285077] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.285079] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.285082] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x246f750) 00:13:16.782 [2024-11-26 20:36:31.285087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.782 [2024-11-26 20:36:31.285097] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3bc0, cid 3, qid 0 00:13:16.782 [2024-11-26 20:36:31.285138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.782 [2024-11-26 20:36:31.285143] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.782 [2024-11-26 20:36:31.285145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.285148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x24d3bc0) on tqpair=0x246f750 00:13:16.782 [2024-11-26 20:36:31.285155] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.285157] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.285160] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x246f750) 00:13:16.782 [2024-11-26 20:36:31.285165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.782 [2024-11-26 20:36:31.285175] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3bc0, cid 3, qid 0 00:13:16.782 [2024-11-26 20:36:31.285211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.782 [2024-11-26 20:36:31.285216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.782 [2024-11-26 20:36:31.285218] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.782 [2024-11-26 20:36:31.285221] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3bc0) on tqpair=0x246f750 00:13:16.783 [2024-11-26 20:36:31.285228] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.783 [2024-11-26 20:36:31.285231] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.783 [2024-11-26 20:36:31.285233] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x246f750) 00:13:16.783 [2024-11-26 20:36:31.285238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.783 [2024-11-26 20:36:31.285248] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3bc0, cid 3, qid 0 00:13:16.783 [2024-11-26 20:36:31.285281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.783 [2024-11-26 20:36:31.285286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.783 [2024-11-26 20:36:31.285288] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.783 [2024-11-26 20:36:31.285291] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3bc0) on tqpair=0x246f750 00:13:16.783 [2024-11-26 20:36:31.285298] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.783 [2024-11-26 20:36:31.285301] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.783 [2024-11-26 20:36:31.285303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x246f750) 00:13:16.783 [2024-11-26 20:36:31.285308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.783 [2024-11-26 20:36:31.285318] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3bc0, cid 3, qid 0 00:13:16.783 [2024-11-26 20:36:31.285356] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.783 [2024-11-26 20:36:31.285361] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.783 [2024-11-26 20:36:31.285363] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.783 [2024-11-26 20:36:31.285366] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3bc0) on tqpair=0x246f750 00:13:16.783 [2024-11-26 20:36:31.285373] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.783 [2024-11-26 20:36:31.285376] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.783 [2024-11-26 20:36:31.285378] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x246f750) 00:13:16.783 [2024-11-26 20:36:31.285383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.783 [2024-11-26 20:36:31.285393] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3bc0, cid 3, qid 0 00:13:16.783 [2024-11-26 20:36:31.285425] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.783 [2024-11-26 20:36:31.285430] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.783 [2024-11-26 20:36:31.285432] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.783 [2024-11-26 20:36:31.285435] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3bc0) on tqpair=0x246f750 00:13:16.783 [2024-11-26 20:36:31.285442] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.783 [2024-11-26 20:36:31.285445] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.783 [2024-11-26 20:36:31.285447] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x246f750) 00:13:16.783 [2024-11-26 20:36:31.285452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.783 [2024-11-26 20:36:31.285463] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3bc0, cid 3, qid 0 00:13:16.783 [2024-11-26 20:36:31.285504] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.783 [2024-11-26 20:36:31.285509] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.783 [2024-11-26 20:36:31.285511] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.783 [2024-11-26 20:36:31.285514] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3bc0) on tqpair=0x246f750 00:13:16.783 [2024-11-26 20:36:31.285521] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.783 [2024-11-26 20:36:31.285524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.783 [2024-11-26 20:36:31.285526] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x246f750) 00:13:16.783 [2024-11-26 20:36:31.285531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.783 [2024-11-26 20:36:31.285541] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3bc0, cid 3, qid 0 00:13:16.783 [2024-11-26 20:36:31.285579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.783 [2024-11-26 20:36:31.285584] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.783 [2024-11-26 20:36:31.285586] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.783 [2024-11-26 20:36:31.289611] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3bc0) on tqpair=0x246f750 00:13:16.783 [2024-11-26 20:36:31.289622] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:16.783 [2024-11-26 20:36:31.289625] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:16.783 [2024-11-26 20:36:31.289628] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x246f750) 00:13:16.783 
[2024-11-26 20:36:31.289635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.783 [2024-11-26 20:36:31.289655] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d3bc0, cid 3, qid 0 00:13:16.783 [2024-11-26 20:36:31.289697] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:16.783 [2024-11-26 20:36:31.289702] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:16.783 [2024-11-26 20:36:31.289705] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:16.783 [2024-11-26 20:36:31.289708] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d3bc0) on tqpair=0x246f750 00:13:16.783 [2024-11-26 20:36:31.289714] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:13:16.783 0% 00:13:16.783 Data Units Read: 0 00:13:16.783 Data Units Written: 0 00:13:16.783 Host Read Commands: 0 00:13:16.783 Host Write Commands: 0 00:13:16.783 Controller Busy Time: 0 minutes 00:13:16.783 Power Cycles: 0 00:13:16.783 Power On Hours: 0 hours 00:13:16.783 Unsafe Shutdowns: 0 00:13:16.783 Unrecoverable Media Errors: 0 00:13:16.783 Lifetime Error Log Entries: 0 00:13:16.783 Warning Temperature Time: 0 minutes 00:13:16.783 Critical Temperature Time: 0 minutes 00:13:16.783 00:13:16.783 Number of Queues 00:13:16.783 ================ 00:13:16.783 Number of I/O Submission Queues: 127 00:13:16.783 Number of I/O Completion Queues: 127 00:13:16.783 00:13:16.783 Active Namespaces 00:13:16.783 ================= 00:13:16.783 Namespace ID:1 00:13:16.783 Error Recovery Timeout: Unlimited 00:13:16.783 Command Set Identifier: NVM (00h) 00:13:16.783 Deallocate: Supported 00:13:16.783 Deallocated/Unwritten Error: Not Supported 00:13:16.783 Deallocated Read Value: Unknown 00:13:16.783 Deallocate in Write Zeroes: Not Supported 00:13:16.783 Deallocated Guard Field: 0xFFFF 00:13:16.783 Flush: Supported 00:13:16.783 Reservation: Supported 00:13:16.783 Namespace Sharing Capabilities: Multiple Controllers 00:13:16.783 Size (in LBAs): 131072 (0GiB) 00:13:16.783 Capacity (in LBAs): 131072 (0GiB) 00:13:16.783 Utilization (in LBAs): 131072 (0GiB) 00:13:16.783 NGUID: ABCDEF0123456789ABCDEF0123456789 00:13:16.783 EUI64: ABCDEF0123456789 00:13:16.783 UUID: 9108cbe1-d01b-41c9-8e49-e2684672bc6b 00:13:16.783 Thin Provisioning: Not Supported 00:13:16.783 Per-NS Atomic Units: Yes 00:13:16.783 Atomic Boundary Size (Normal): 0 00:13:16.783 Atomic Boundary Size (PFail): 0 00:13:16.783 Atomic Boundary Offset: 0 00:13:16.783 Maximum Single Source Range Length: 65535 00:13:16.783 Maximum Copy Length: 65535 00:13:16.783 Maximum Source Range Count: 1 00:13:16.783 NGUID/EUI64 Never Reused: No 00:13:16.783 Namespace Write Protected: No 00:13:16.783 Number of LBA Formats: 1 00:13:16.783 Current LBA Format: LBA Format #00 00:13:16.783 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:16.783 00:13:16.783 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:17.041 rmmod nvme_tcp 00:13:17.041 rmmod nvme_fabrics 00:13:17.041 rmmod nvme_keyring 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 73133 ']' 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 73133 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 73133 ']' 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 73133 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73133 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:17.041 killing process with pid 73133 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73133' 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 73133 00:13:17.041 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 73133 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip 
link set nvmf_init_br nomaster 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:13:17.300 00:13:17.300 real 0m2.585s 00:13:17.300 user 0m6.609s 00:13:17.300 sys 0m0.637s 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.300 20:36:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:17.300 ************************************ 00:13:17.300 END TEST nvmf_identify 00:13:17.300 ************************************ 00:13:17.559 20:36:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:13:17.559 20:36:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:17.559 20:36:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.559 20:36:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:13:17.559 ************************************ 00:13:17.559 START TEST nvmf_perf 00:13:17.559 ************************************ 00:13:17.559 20:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:13:17.559 * Looking for test storage... 
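The identify report printed above was taken against the NVMe-oF TCP subsystem nqn.2016-06.io.spdk:cnode1 listening at 10.0.0.3:4420 (both values appear in the log). A minimal sketch of inspecting the same controller by hand with nvme-cli, assuming the kernel NVMe/TCP initiator module is available on the host and the controller enumerates as /dev/nvme1 -- the device name and module availability are assumptions for illustration, not taken from this run:

modprobe nvme-tcp                                  # kernel NVMe/TCP initiator (assumed available)
nvme discover -t tcp -a 10.0.0.3 -s 4420           # discovery log entries exported by the target
nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme id-ctrl /dev/nvme1                            # controller identify data (vendor ID, MDTS, firmware revision, ...)
nvme id-ns /dev/nvme1n1                            # namespace identify data (capacity, LBA formats, NGUID/EUI64/UUID)
nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # tear the association back down

The report in the log itself appears to be produced userspace-side by SPDK's bundled identify example application, driven through test/nvmf/host/identify.sh, so no kernel initiator is needed for the autotest path; the nvme-cli sequence above is only an equivalent manual check.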
00:13:17.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:17.559 20:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:17.559 20:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:17.559 20:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:13:17.559 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:17.559 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:17.559 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:17.559 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:17.559 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:13:17.559 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:13:17.559 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:17.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.560 --rc genhtml_branch_coverage=1 00:13:17.560 --rc genhtml_function_coverage=1 00:13:17.560 --rc genhtml_legend=1 00:13:17.560 --rc geninfo_all_blocks=1 00:13:17.560 --rc geninfo_unexecuted_blocks=1 00:13:17.560 00:13:17.560 ' 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:17.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.560 --rc genhtml_branch_coverage=1 00:13:17.560 --rc genhtml_function_coverage=1 00:13:17.560 --rc genhtml_legend=1 00:13:17.560 --rc geninfo_all_blocks=1 00:13:17.560 --rc geninfo_unexecuted_blocks=1 00:13:17.560 00:13:17.560 ' 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:17.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.560 --rc genhtml_branch_coverage=1 00:13:17.560 --rc genhtml_function_coverage=1 00:13:17.560 --rc genhtml_legend=1 00:13:17.560 --rc geninfo_all_blocks=1 00:13:17.560 --rc geninfo_unexecuted_blocks=1 00:13:17.560 00:13:17.560 ' 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:17.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.560 --rc genhtml_branch_coverage=1 00:13:17.560 --rc genhtml_function_coverage=1 00:13:17.560 --rc genhtml_legend=1 00:13:17.560 --rc geninfo_all_blocks=1 00:13:17.560 --rc geninfo_unexecuted_blocks=1 00:13:17.560 00:13:17.560 ' 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:17.560 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:17.560 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:17.561 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:17.561 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:17.561 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.561 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:17.561 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:17.561 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:17.561 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:17.561 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:17.561 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:17.561 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.561 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:17.561 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:17.561 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:17.561 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:17.561 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:17.819 Cannot find device "nvmf_init_br" 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:17.819 Cannot find device "nvmf_init_br2" 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:17.819 Cannot find device "nvmf_tgt_br" 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:17.819 Cannot find device "nvmf_tgt_br2" 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:17.819 Cannot find device "nvmf_init_br" 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:17.819 Cannot find device "nvmf_init_br2" 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:17.819 Cannot find device "nvmf_tgt_br" 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:17.819 Cannot find device "nvmf_tgt_br2" 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:17.819 Cannot find device "nvmf_br" 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:17.819 Cannot find device "nvmf_init_if" 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:17.819 Cannot find device "nvmf_init_if2" 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:17.819 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:17.819 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:17.819 20:36:32 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:17.819 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:18.077 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:18.077 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:18.077 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:18.077 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:18.077 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:18.077 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:18.077 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:18.077 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:18.077 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:18.077 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:18.078 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:18.078 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:13:18.078 00:13:18.078 --- 10.0.0.3 ping statistics --- 00:13:18.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.078 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:13:18.078 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:18.078 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:13:18.078 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.105 ms 00:13:18.078 00:13:18.078 --- 10.0.0.4 ping statistics --- 00:13:18.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.078 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:13:18.078 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:18.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:18.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:13:18.078 00:13:18.078 --- 10.0.0.1 ping statistics --- 00:13:18.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.078 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:13:18.078 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:18.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:18.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:13:18.078 00:13:18.078 --- 10.0.0.2 ping statistics --- 00:13:18.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.078 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:13:18.078 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:18.078 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:13:18.078 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:18.078 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:18.078 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:18.078 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:18.078 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:18.078 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:18.078 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:18.078 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:13:18.078 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:18.078 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:18.078 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:13:18.078 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=73389 00:13:18.078 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 73389 00:13:18.078 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 73389 ']' 00:13:18.078 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.078 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:18.078 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:18.078 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:18.078 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:13:18.078 20:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:18.078 [2024-11-26 20:36:32.525183] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:13:18.078 [2024-11-26 20:36:32.525258] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:18.337 [2024-11-26 20:36:32.668862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:18.337 [2024-11-26 20:36:32.713566] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:18.337 [2024-11-26 20:36:32.713620] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:18.337 [2024-11-26 20:36:32.713626] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:18.337 [2024-11-26 20:36:32.713632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:18.337 [2024-11-26 20:36:32.713637] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:18.337 [2024-11-26 20:36:32.714741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.337 [2024-11-26 20:36:32.715023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:18.337 [2024-11-26 20:36:32.715841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:18.337 [2024-11-26 20:36:32.715845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.337 [2024-11-26 20:36:32.758144] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:18.903 20:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:18.903 20:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:13:18.903 20:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:18.903 20:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:18.903 20:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:13:18.903 20:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.903 20:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:18.903 20:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:13:19.535 20:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:13:19.535 20:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:13:19.535 20:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:13:19.535 20:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:13:19.793 20:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:13:19.793 20:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:13:19.793 20:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:13:19.793 20:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:13:19.793 20:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:20.051 [2024-11-26 20:36:34.424323] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:20.051 20:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:20.310 20:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:13:20.310 20:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:20.567 20:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:13:20.567 20:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:13:20.567 20:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:20.825 [2024-11-26 20:36:35.277957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:20.825 20:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:21.083 20:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:13:21.083 20:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:21.083 20:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:13:21.083 20:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:22.456 Initializing NVMe Controllers 00:13:22.456 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:22.456 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:22.456 Initialization complete. Launching workers. 
00:13:22.456 ======================================================== 00:13:22.456 Latency(us) 00:13:22.456 Device Information : IOPS MiB/s Average min max 00:13:22.456 PCIE (0000:00:10.0) NSID 1 from core 0: 16331.20 63.79 1959.02 232.78 9498.90 00:13:22.456 ======================================================== 00:13:22.456 Total : 16331.20 63.79 1959.02 232.78 9498.90 00:13:22.456 00:13:22.456 20:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:23.545 Initializing NVMe Controllers 00:13:23.545 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:23.545 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:23.545 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:23.545 Initialization complete. Launching workers. 00:13:23.545 ======================================================== 00:13:23.545 Latency(us) 00:13:23.545 Device Information : IOPS MiB/s Average min max 00:13:23.545 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4680.41 18.28 213.37 77.95 5054.11 00:13:23.545 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.51 0.48 8096.22 7014.31 12007.82 00:13:23.545 ======================================================== 00:13:23.546 Total : 4803.92 18.77 416.04 77.95 12007.82 00:13:23.546 00:13:23.546 20:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:24.918 Initializing NVMe Controllers 00:13:24.918 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:24.918 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:24.918 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:24.918 Initialization complete. Launching workers. 00:13:24.918 ======================================================== 00:13:24.918 Latency(us) 00:13:24.918 Device Information : IOPS MiB/s Average min max 00:13:24.918 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9140.95 35.71 3500.40 496.91 8546.24 00:13:24.918 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3872.35 15.13 8264.79 4861.04 17702.39 00:13:24.918 ======================================================== 00:13:24.918 Total : 13013.30 50.83 4918.13 496.91 17702.39 00:13:24.918 00:13:24.918 20:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:13:24.918 20:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:27.442 Initializing NVMe Controllers 00:13:27.442 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:27.442 Controller IO queue size 128, less than required. 00:13:27.442 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:27.442 Controller IO queue size 128, less than required. 
00:13:27.442 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:27.442 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:27.442 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:27.442 Initialization complete. Launching workers. 00:13:27.442 ======================================================== 00:13:27.442 Latency(us) 00:13:27.442 Device Information : IOPS MiB/s Average min max 00:13:27.442 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2192.58 548.15 58981.25 30728.20 88403.16 00:13:27.442 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 677.75 169.44 197038.84 56403.83 295347.09 00:13:27.442 ======================================================== 00:13:27.442 Total : 2870.34 717.58 91579.85 30728.20 295347.09 00:13:27.442 00:13:27.442 20:36:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:13:27.700 Initializing NVMe Controllers 00:13:27.700 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:27.700 Controller IO queue size 128, less than required. 00:13:27.700 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:27.700 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:13:27.700 Controller IO queue size 128, less than required. 00:13:27.700 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:27.700 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:13:27.700 WARNING: Some requested NVMe devices were skipped 00:13:27.700 No valid NVMe controllers or AIO or URING devices found 00:13:27.700 20:36:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:13:30.226 Initializing NVMe Controllers 00:13:30.226 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:30.226 Controller IO queue size 128, less than required. 00:13:30.226 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:30.226 Controller IO queue size 128, less than required. 00:13:30.226 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:30.226 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:30.226 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:30.226 Initialization complete. Launching workers. 
00:13:30.226 00:13:30.226 ==================== 00:13:30.226 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:13:30.226 TCP transport: 00:13:30.226 polls: 12586 00:13:30.226 idle_polls: 7986 00:13:30.226 sock_completions: 4600 00:13:30.226 nvme_completions: 8111 00:13:30.226 submitted_requests: 12206 00:13:30.226 queued_requests: 1 00:13:30.226 00:13:30.226 ==================== 00:13:30.226 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:13:30.226 TCP transport: 00:13:30.226 polls: 15767 00:13:30.226 idle_polls: 10394 00:13:30.226 sock_completions: 5373 00:13:30.226 nvme_completions: 8143 00:13:30.226 submitted_requests: 12286 00:13:30.226 queued_requests: 1 00:13:30.226 ======================================================== 00:13:30.226 Latency(us) 00:13:30.226 Device Information : IOPS MiB/s Average min max 00:13:30.226 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2026.95 506.74 64164.70 33019.38 104282.69 00:13:30.226 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2034.95 508.74 63421.90 27870.47 114593.94 00:13:30.226 ======================================================== 00:13:30.226 Total : 4061.90 1015.48 63792.57 27870.47 114593.94 00:13:30.226 00:13:30.226 20:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:13:30.226 20:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:30.484 20:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:13:30.484 20:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:13:30.484 20:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:13:30.484 20:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:30.484 20:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:13:30.484 20:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:30.484 20:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:13:30.484 20:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:30.484 20:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:30.484 rmmod nvme_tcp 00:13:30.484 rmmod nvme_fabrics 00:13:30.484 rmmod nvme_keyring 00:13:30.484 20:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:30.484 20:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:13:30.484 20:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:13:30.484 20:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 73389 ']' 00:13:30.484 20:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 73389 00:13:30.484 20:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 73389 ']' 00:13:30.484 20:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 73389 00:13:30.484 20:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:13:30.484 20:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:30.484 20:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73389 00:13:30.484 killing process with pid 73389 00:13:30.484 20:36:45 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:30.484 20:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:30.484 20:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73389' 00:13:30.484 20:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 73389 00:13:30.484 20:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 73389 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:13:32.383 00:13:32.383 real 0m14.865s 00:13:32.383 user 0m53.102s 00:13:32.383 sys 0m3.426s 00:13:32.383 20:36:46 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:13:32.383 ************************************ 00:13:32.383 END TEST nvmf_perf 00:13:32.383 ************************************ 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:13:32.383 ************************************ 00:13:32.383 START TEST nvmf_fio_host 00:13:32.383 ************************************ 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:13:32.383 * Looking for test storage... 00:13:32.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:32.383 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:32.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.642 --rc genhtml_branch_coverage=1 00:13:32.642 --rc genhtml_function_coverage=1 00:13:32.642 --rc genhtml_legend=1 00:13:32.642 --rc geninfo_all_blocks=1 00:13:32.642 --rc geninfo_unexecuted_blocks=1 00:13:32.642 00:13:32.642 ' 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:32.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.642 --rc genhtml_branch_coverage=1 00:13:32.642 --rc genhtml_function_coverage=1 00:13:32.642 --rc genhtml_legend=1 00:13:32.642 --rc geninfo_all_blocks=1 00:13:32.642 --rc geninfo_unexecuted_blocks=1 00:13:32.642 00:13:32.642 ' 00:13:32.642 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:32.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.642 --rc genhtml_branch_coverage=1 00:13:32.642 --rc genhtml_function_coverage=1 00:13:32.642 --rc genhtml_legend=1 00:13:32.642 --rc geninfo_all_blocks=1 00:13:32.642 --rc geninfo_unexecuted_blocks=1 00:13:32.642 00:13:32.642 ' 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:32.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.643 --rc genhtml_branch_coverage=1 00:13:32.643 --rc genhtml_function_coverage=1 00:13:32.643 --rc genhtml_legend=1 00:13:32.643 --rc geninfo_all_blocks=1 00:13:32.643 --rc geninfo_unexecuted_blocks=1 00:13:32.643 00:13:32.643 ' 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.643 20:36:46 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.643 20:36:46 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:32.643 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.643 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:32.644 Cannot find device "nvmf_init_br" 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:13:32.644 20:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:32.644 Cannot find device "nvmf_init_br2" 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:32.644 Cannot find device "nvmf_tgt_br" 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:13:32.644 Cannot find device "nvmf_tgt_br2" 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:32.644 Cannot find device "nvmf_init_br" 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:32.644 Cannot find device "nvmf_init_br2" 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:32.644 Cannot find device "nvmf_tgt_br" 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:32.644 Cannot find device "nvmf_tgt_br2" 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:32.644 Cannot find device "nvmf_br" 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:32.644 Cannot find device "nvmf_init_if" 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:32.644 Cannot find device "nvmf_init_if2" 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:32.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:32.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:32.644 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:32.904 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:32.904 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:13:32.904 00:13:32.904 --- 10.0.0.3 ping statistics --- 00:13:32.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.904 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:32.904 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:32.904 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:13:32.904 00:13:32.904 --- 10.0.0.4 ping statistics --- 00:13:32.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.904 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:32.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:32.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:13:32.904 00:13:32.904 --- 10.0.0.1 ping statistics --- 00:13:32.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.904 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:32.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:32.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:13:32.904 00:13:32.904 --- 10.0.0.2 ping statistics --- 00:13:32.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.904 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=73841 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 73841 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 73841 ']' 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:32.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:32.904 20:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:13:32.904 [2024-11-26 20:36:47.344953] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:13:32.904 [2024-11-26 20:36:47.345034] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.163 [2024-11-26 20:36:47.484584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:33.163 [2024-11-26 20:36:47.523753] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.163 [2024-11-26 20:36:47.523797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.163 [2024-11-26 20:36:47.523804] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:33.163 [2024-11-26 20:36:47.523809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:33.163 [2024-11-26 20:36:47.523813] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
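Editor's note (not part of the captured log): the records above show the harness launching the target inside the test namespace and then blocking on waitforlisten until the RPC socket appears. A minimal sketch of that step, assuming the same binary path and flags shown in the log; the polling loop is an illustrative stand-in for waitforlisten, not its actual implementation:

    # launch nvmf_tgt inside the namespace, as in the log record above
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # simplified stand-in for waitforlisten: poll until the RPC socket exists
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done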
00:13:33.163 [2024-11-26 20:36:47.524583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.163 [2024-11-26 20:36:47.524787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.163 [2024-11-26 20:36:47.525380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.163 [2024-11-26 20:36:47.525143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:33.163 [2024-11-26 20:36:47.560107] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:33.730 20:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:33.730 20:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:13:33.730 20:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:33.988 [2024-11-26 20:36:48.444703] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:33.988 20:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:13:33.988 20:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:33.988 20:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:13:33.988 20:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:34.247 Malloc1 00:13:34.247 20:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:34.505 20:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:34.766 20:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:35.029 [2024-11-26 20:36:49.346697] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:35.029 20:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:35.029 20:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:13:35.029 20:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:13:35.029 20:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:13:35.029 20:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:35.029 20:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:35.029 20:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:35.029 20:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:35.029 20:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:13:35.029 20:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:35.030 20:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:35.289 20:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:35.289 20:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:13:35.289 20:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:35.289 20:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:13:35.289 20:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:13:35.289 20:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:35.289 20:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:35.289 20:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:13:35.289 20:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:35.289 20:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:13:35.289 20:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:13:35.289 20:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:35.289 20:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:13:35.289 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:35.289 fio-3.35 00:13:35.289 Starting 1 thread 00:13:37.816 00:13:37.816 test: (groupid=0, jobs=1): err= 0: pid=73919: Tue Nov 26 20:36:52 2024 00:13:37.816 read: IOPS=9830, BW=38.4MiB/s (40.3MB/s)(77.0MiB/2006msec) 00:13:37.816 slat (nsec): min=1900, max=164875, avg=2024.35, stdev=1590.75 00:13:37.816 clat (usec): min=1791, max=11864, avg=6795.69, stdev=493.99 00:13:37.816 lat (usec): min=1811, max=11866, avg=6797.71, stdev=493.82 00:13:37.816 clat percentiles (usec): 00:13:37.816 | 1.00th=[ 5800], 5.00th=[ 6128], 10.00th=[ 6259], 20.00th=[ 6456], 00:13:37.816 | 30.00th=[ 6587], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6849], 00:13:37.816 | 70.00th=[ 6980], 80.00th=[ 7111], 90.00th=[ 7308], 95.00th=[ 7504], 00:13:37.816 | 99.00th=[ 8029], 99.50th=[ 8717], 99.90th=[10159], 99.95th=[11338], 00:13:37.816 | 99.99th=[11731] 00:13:37.816 bw ( KiB/s): min=38808, max=39800, per=99.97%, avg=39308.00, stdev=475.02, samples=4 00:13:37.816 iops : min= 9702, max= 9950, avg=9827.00, stdev=118.75, samples=4 00:13:37.816 write: IOPS=9845, BW=38.5MiB/s (40.3MB/s)(77.2MiB/2006msec); 0 zone resets 00:13:37.816 slat (nsec): min=1937, max=120520, avg=2171.82, stdev=1059.47 00:13:37.816 clat (usec): min=1363, max=11847, avg=6165.03, stdev=455.42 00:13:37.816 lat (usec): min=1380, max=11850, avg=6167.20, stdev=455.36 00:13:37.816 
clat percentiles (usec): 00:13:37.816 | 1.00th=[ 5276], 5.00th=[ 5538], 10.00th=[ 5735], 20.00th=[ 5866], 00:13:37.816 | 30.00th=[ 5997], 40.00th=[ 6063], 50.00th=[ 6128], 60.00th=[ 6259], 00:13:37.816 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6587], 95.00th=[ 6783], 00:13:37.816 | 99.00th=[ 7242], 99.50th=[ 8094], 99.90th=[10028], 99.95th=[11338], 00:13:37.816 | 99.99th=[11863] 00:13:37.816 bw ( KiB/s): min=39240, max=39680, per=99.99%, avg=39378.00, stdev=203.06, samples=4 00:13:37.816 iops : min= 9810, max= 9920, avg=9844.50, stdev=50.76, samples=4 00:13:37.816 lat (msec) : 2=0.04%, 4=0.11%, 10=99.74%, 20=0.11% 00:13:37.816 cpu : usr=78.20%, sys=17.06%, ctx=11, majf=0, minf=7 00:13:37.816 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:13:37.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:37.816 issued rwts: total=19719,19751,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.816 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:37.816 00:13:37.816 Run status group 0 (all jobs): 00:13:37.816 READ: bw=38.4MiB/s (40.3MB/s), 38.4MiB/s-38.4MiB/s (40.3MB/s-40.3MB/s), io=77.0MiB (80.8MB), run=2006-2006msec 00:13:37.816 WRITE: bw=38.5MiB/s (40.3MB/s), 38.5MiB/s-38.5MiB/s (40.3MB/s-40.3MB/s), io=77.2MiB (80.9MB), run=2006-2006msec 00:13:37.816 20:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:13:37.816 20:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:13:37.816 20:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:37.816 20:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:37.816 20:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:37.816 20:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:37.816 20:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:13:37.816 20:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:37.816 20:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:37.816 20:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:37.816 20:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:37.816 20:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:13:37.816 20:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:13:37.816 20:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:13:37.816 20:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:37.816 20:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:37.816 20:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:37.816 20:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:13:37.816 20:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:13:37.816 20:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:13:37.816 20:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:37.816 20:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:13:37.816 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:13:37.816 fio-3.35 00:13:37.816 Starting 1 thread 00:13:40.343 00:13:40.343 test: (groupid=0, jobs=1): err= 0: pid=73967: Tue Nov 26 20:36:54 2024 00:13:40.343 read: IOPS=9132, BW=143MiB/s (150MB/s)(286MiB/2005msec) 00:13:40.343 slat (usec): min=3, max=172, avg= 3.46, stdev= 1.98 00:13:40.343 clat (usec): min=2757, max=16933, avg=7935.68, stdev=2493.41 00:13:40.343 lat (usec): min=2760, max=16936, avg=7939.13, stdev=2493.48 00:13:40.343 clat percentiles (usec): 00:13:40.343 | 1.00th=[ 3687], 5.00th=[ 4424], 10.00th=[ 4948], 20.00th=[ 5669], 00:13:40.343 | 30.00th=[ 6390], 40.00th=[ 6980], 50.00th=[ 7635], 60.00th=[ 8356], 00:13:40.343 | 70.00th=[ 9110], 80.00th=[ 9896], 90.00th=[10945], 95.00th=[12518], 00:13:40.343 | 99.00th=[15008], 99.50th=[16319], 99.90th=[16909], 99.95th=[16909], 00:13:40.343 | 99.99th=[16909] 00:13:40.343 bw ( KiB/s): min=63200, max=79584, per=49.14%, avg=71808.00, stdev=6718.98, samples=4 00:13:40.343 iops : min= 3950, max= 4974, avg=4488.00, stdev=419.94, samples=4 00:13:40.343 write: IOPS=5370, BW=83.9MiB/s (88.0MB/s)(147MiB/1749msec); 0 zone resets 00:13:40.343 slat (usec): min=36, max=312, avg=38.75, stdev= 6.79 00:13:40.343 clat (usec): min=2909, max=20389, avg=10888.01, stdev=1960.74 00:13:40.343 lat (usec): min=2946, max=20426, avg=10926.76, stdev=1960.96 00:13:40.343 clat percentiles (usec): 00:13:40.343 | 1.00th=[ 7111], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9372], 00:13:40.343 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10683], 60.00th=[11207], 00:13:40.343 | 70.00th=[11600], 80.00th=[12256], 90.00th=[13304], 95.00th=[14353], 00:13:40.343 | 99.00th=[16909], 99.50th=[17695], 99.90th=[20055], 99.95th=[20055], 00:13:40.343 | 99.99th=[20317] 00:13:40.343 bw ( KiB/s): min=67264, max=82016, per=86.90%, avg=74672.00, stdev=6261.02, samples=4 00:13:40.343 iops : min= 4204, max= 5126, avg=4667.00, stdev=391.31, samples=4 00:13:40.343 lat (msec) : 4=1.49%, 10=63.16%, 20=35.33%, 50=0.02% 00:13:40.343 cpu : usr=85.08%, sys=11.58%, ctx=34, majf=0, minf=10 00:13:40.343 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:13:40.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:40.343 issued rwts: total=18311,9393,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:40.343 00:13:40.343 Run status group 0 (all jobs): 00:13:40.343 
READ: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=286MiB (300MB), run=2005-2005msec 00:13:40.343 WRITE: bw=83.9MiB/s (88.0MB/s), 83.9MiB/s-83.9MiB/s (88.0MB/s-88.0MB/s), io=147MiB (154MB), run=1749-1749msec 00:13:40.343 20:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:40.343 20:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:13:40.343 20:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:13:40.343 20:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:13:40.343 20:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:13:40.343 20:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:40.343 20:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:13:40.601 20:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:40.601 20:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:13:40.601 20:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:40.601 20:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:40.601 rmmod nvme_tcp 00:13:40.601 rmmod nvme_fabrics 00:13:40.601 rmmod nvme_keyring 00:13:40.601 20:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:40.601 20:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:13:40.601 20:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:13:40.601 20:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 73841 ']' 00:13:40.601 20:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 73841 00:13:40.601 20:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 73841 ']' 00:13:40.601 20:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 73841 00:13:40.601 20:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:13:40.601 20:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:40.601 20:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73841 00:13:40.602 killing process with pid 73841 00:13:40.602 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:40.602 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:40.602 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73841' 00:13:40.602 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 73841 00:13:40.602 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 73841 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:13:40.860 20:36:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:13:40.860 00:13:40.860 real 0m8.566s 00:13:40.860 user 0m35.018s 00:13:40.860 sys 0m1.927s 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.860 20:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:13:40.860 ************************************ 00:13:40.860 END TEST nvmf_fio_host 00:13:40.860 ************************************ 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:13:41.121 ************************************ 00:13:41.121 START TEST nvmf_failover 
00:13:41.121 ************************************ 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:13:41.121 * Looking for test storage... 00:13:41.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:41.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.121 --rc genhtml_branch_coverage=1 00:13:41.121 --rc genhtml_function_coverage=1 00:13:41.121 --rc genhtml_legend=1 00:13:41.121 --rc geninfo_all_blocks=1 00:13:41.121 --rc geninfo_unexecuted_blocks=1 00:13:41.121 00:13:41.121 ' 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:41.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.121 --rc genhtml_branch_coverage=1 00:13:41.121 --rc genhtml_function_coverage=1 00:13:41.121 --rc genhtml_legend=1 00:13:41.121 --rc geninfo_all_blocks=1 00:13:41.121 --rc geninfo_unexecuted_blocks=1 00:13:41.121 00:13:41.121 ' 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:41.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.121 --rc genhtml_branch_coverage=1 00:13:41.121 --rc genhtml_function_coverage=1 00:13:41.121 --rc genhtml_legend=1 00:13:41.121 --rc geninfo_all_blocks=1 00:13:41.121 --rc geninfo_unexecuted_blocks=1 00:13:41.121 00:13:41.121 ' 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:41.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.121 --rc genhtml_branch_coverage=1 00:13:41.121 --rc genhtml_function_coverage=1 00:13:41.121 --rc genhtml_legend=1 00:13:41.121 --rc geninfo_all_blocks=1 00:13:41.121 --rc geninfo_unexecuted_blocks=1 00:13:41.121 00:13:41.121 ' 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.122 
20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:41.122 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
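Editor's note (not part of the captured log): the "line 33: [: : integer expression expected" message above comes from a numeric test run against an empty value ('[' '' -eq 1 ']'). A hedged sketch of how such a comparison can be guarded in bash; the variable name below is hypothetical, not the one used in common.sh:

    # default an empty/unset value to 0 before the numeric test,
    # so "[: : integer expression expected" is not emitted
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi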
00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:41.122 Cannot find device "nvmf_init_br" 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:41.122 Cannot find device "nvmf_init_br2" 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:13:41.122 Cannot find device "nvmf_tgt_br" 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:41.122 Cannot find device "nvmf_tgt_br2" 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:41.122 Cannot find device "nvmf_init_br" 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:41.122 Cannot find device "nvmf_init_br2" 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:41.122 Cannot find device "nvmf_tgt_br" 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:41.122 Cannot find device "nvmf_tgt_br2" 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:41.122 Cannot find device "nvmf_br" 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:13:41.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:41.381 Cannot find device "nvmf_init_if" 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:41.381 Cannot find device "nvmf_init_if2" 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:41.381 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:41.381 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:41.381 
20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:41.381 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:41.381 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:13:41.381 00:13:41.381 --- 10.0.0.3 ping statistics --- 00:13:41.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.381 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:41.381 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:41.381 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:13:41.381 00:13:41.381 --- 10.0.0.4 ping statistics --- 00:13:41.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.381 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:41.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:41.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:13:41.381 00:13:41.381 --- 10.0.0.1 ping statistics --- 00:13:41.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.381 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:41.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:41.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:13:41.381 00:13:41.381 --- 10.0.0.2 ping statistics --- 00:13:41.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.381 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:41.381 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:41.639 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:13:41.639 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:41.640 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:41.640 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:13:41.640 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=74232 00:13:41.640 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 74232 00:13:41.640 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 74232 ']' 00:13:41.640 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:13:41.640 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:41.640 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:41.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.640 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.640 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:41.640 20:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:13:41.640 [2024-11-26 20:36:55.983720] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:13:41.640 [2024-11-26 20:36:55.983799] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.640 [2024-11-26 20:36:56.123002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:41.640 [2024-11-26 20:36:56.161459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.640 [2024-11-26 20:36:56.161506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.640 [2024-11-26 20:36:56.161513] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:41.640 [2024-11-26 20:36:56.161517] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:41.640 [2024-11-26 20:36:56.161522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
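[Note] The nvmf/common.sh block above (before nvmfappstart) tears down any leftover interfaces and then rebuilds the two-path TCP topology this test relies on: the host keeps nvmf_init_if (10.0.0.1/24) and nvmf_init_if2 (10.0.0.2/24), their veth peers nvmf_init_br/nvmf_init_br2 and the target-side peers nvmf_tgt_br/nvmf_tgt_br2 are enslaved to the nvmf_br bridge, nvmf_tgt_if (10.0.0.3/24) and nvmf_tgt_if2 (10.0.0.4/24) are moved into the nvmf_tgt_ns_spdk namespace where the target will run, iptables accepts TCP 4420 on the initiator interfaces, and the four pings confirm reachability in both directions. A minimal single-path sketch of the same setup (names and addresses taken from the log, second path and cleanup omitted):
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                               # bridge joining the two veth peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                            # host -> namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # namespace -> host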
00:13:41.640 [2024-11-26 20:36:56.162255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.640 [2024-11-26 20:36:56.162529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.640 [2024-11-26 20:36:56.162644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.897 [2024-11-26 20:36:56.197352] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:42.462 20:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:42.462 20:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:13:42.462 20:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:42.462 20:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:42.462 20:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:13:42.462 20:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.462 20:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:42.720 [2024-11-26 20:36:57.186165] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.720 20:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:13:42.977 Malloc0 00:13:42.977 20:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:43.543 20:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:43.543 20:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:43.801 [2024-11-26 20:36:58.207357] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:43.801 20:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:13:44.059 [2024-11-26 20:36:58.419472] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:13:44.059 20:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:13:44.317 [2024-11-26 20:36:58.639733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:13:44.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
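[Note] Before bdevperf comes up, host/failover.sh has already configured the target entirely over its RPC socket: a TCP transport, a 64 MiB / 512-byte-block Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and three listeners on 10.0.0.3 ports 4420/4421/4422, so the initiator has alternative paths to fail over between. Condensed to the bare rpc.py calls seen above (the for-loop is only a shorthand; the script adds the listeners one per step):
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_create_transport -t tcp -o -u 8192                  # TCP transport with the options used above
  $RPC bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM-backed bdev, 512 B blocks
  $RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001      # allow-any-host, fixed serial
  $RPC nvmf_subsystem_add_ns $NQN Malloc0
  for port in 4420 4421 4422; do                                # three listeners = three failover targets
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.3 -s "$port"
  done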
00:13:44.317 20:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=74290 00:13:44.317 20:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:44.317 20:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 74290 /var/tmp/bdevperf.sock 00:13:44.317 20:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:13:44.317 20:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 74290 ']' 00:13:44.317 20:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:44.317 20:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.317 20:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:44.318 20:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.318 20:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:13:45.252 20:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.253 20:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:13:45.253 20:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:13:45.510 NVMe0n1 00:13:45.510 20:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:13:45.767 00:13:45.767 20:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=74313 00:13:45.767 20:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:13:45.767 20:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:46.700 20:37:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:46.957 20:37:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:13:50.347 20:37:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:13:50.347 00:13:50.347 20:37:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:13:50.605 20:37:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:13:53.889 20:37:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:53.889 [2024-11-26 20:37:08.213114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:53.889 20:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:13:54.823 20:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:13:55.082 20:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 74313 00:14:01.797 { 00:14:01.797 "results": [ 00:14:01.797 { 00:14:01.797 "job": "NVMe0n1", 00:14:01.797 "core_mask": "0x1", 00:14:01.797 "workload": "verify", 00:14:01.797 "status": "finished", 00:14:01.797 "verify_range": { 00:14:01.797 "start": 0, 00:14:01.797 "length": 16384 00:14:01.797 }, 00:14:01.797 "queue_depth": 128, 00:14:01.797 "io_size": 4096, 00:14:01.797 "runtime": 15.00987, 00:14:01.797 "iops": 9821.870542516357, 00:14:01.797 "mibps": 38.36668180670452, 00:14:01.797 "io_failed": 4285, 00:14:01.797 "io_timeout": 0, 00:14:01.797 "avg_latency_us": 12635.904580216302, 00:14:01.797 "min_latency_us": 567.1384615384616, 00:14:01.797 "max_latency_us": 39321.6 00:14:01.797 } 00:14:01.797 ], 00:14:01.797 "core_count": 1 00:14:01.797 } 00:14:01.797 20:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 74290 00:14:01.797 20:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 74290 ']' 00:14:01.797 20:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 74290 00:14:01.797 20:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:14:01.797 20:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:01.797 20:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74290 00:14:01.797 killing process with pid 74290 00:14:01.797 20:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:01.797 20:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:01.797 20:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74290' 00:14:01.797 20:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 74290 00:14:01.797 20:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 74290 00:14:01.797 20:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:01.797 [2024-11-26 20:36:58.693780] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:14:01.797 [2024-11-26 20:36:58.693875] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74290 ] 00:14:01.797 [2024-11-26 20:36:58.836186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.797 [2024-11-26 20:36:58.878556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.797 [2024-11-26 20:36:58.913030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:01.797 Running I/O for 15 seconds... 00:14:01.797 8629.00 IOPS, 33.71 MiB/s [2024-11-26T20:37:16.352Z] [2024-11-26 20:37:01.351691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.797 [2024-11-26 20:37:01.352068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.797 [2024-11-26 20:37:01.352149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.797 [2024-11-26 20:37:01.352194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.797 [2024-11-26 20:37:01.352235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.798 [2024-11-26 20:37:01.352284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.352322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.798 [2024-11-26 20:37:01.352363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.352402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.798 [2024-11-26 20:37:01.352442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.352484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.798 [2024-11-26 20:37:01.352530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.352569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.798 [2024-11-26 20:37:01.352638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.352678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.798 [2024-11-26 20:37:01.352720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:01.798 [2024-11-26 20:37:01.352755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.352798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.352838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.352883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.352920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.352989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.353027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.353068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.353102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.353143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.353184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.353226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.353260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.353302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.353336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.353374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.353409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.353450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.353487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.353542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.353597] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.353641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.353679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.353721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.353758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:77768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.353791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.353828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:77776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.353868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.353908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.353949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.354015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.354057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.354105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.354151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.354188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.354240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.354281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:77816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.354320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.354355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:77824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.354391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.354424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.354461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.354499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:77840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.354541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.354576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.354628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.354663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.354699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.354736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.798 [2024-11-26 20:37:01.354773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.354810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.798 [2024-11-26 20:37:01.354852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.354886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.798 [2024-11-26 20:37:01.354924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.354962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.798 [2024-11-26 20:37:01.355009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.355046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.798 [2024-11-26 20:37:01.355083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.355120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.798 [2024-11-26 20:37:01.355160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.355197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:38 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.798 [2024-11-26 20:37:01.355238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.355275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.798 [2024-11-26 20:37:01.355312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.355349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.355392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.355426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:77872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.355465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.355499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.355536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.355569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.355621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.355663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.355701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.355743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.355784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.355821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.355860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.355897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.355936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.355970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:77928 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.356021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.356058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.356100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.356138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.356174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.356210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.356252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.356292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.356332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.356365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.356406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.356447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.356483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.356523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.356564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.356607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.798 [2024-11-26 20:37:01.356651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.356689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.798 [2024-11-26 20:37:01.356726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.356763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.798 
[2024-11-26 20:37:01.356796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.356832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.798 [2024-11-26 20:37:01.356874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.356912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.798 [2024-11-26 20:37:01.356945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.356985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.798 [2024-11-26 20:37:01.357018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.357052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.798 [2024-11-26 20:37:01.357093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.357130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.798 [2024-11-26 20:37:01.357172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.357206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.357217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.357228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.357237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.357248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.357257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.357270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.357283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.357300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.357308] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.357319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.357328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.357338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.357347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.357357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.357365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.357376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.357385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.357395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.357409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.357420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.357428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.357438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.357447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.357457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.357466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.798 [2024-11-26 20:37:01.357477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.798 [2024-11-26 20:37:01.357485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.357495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:01.357504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.357514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:01.357523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.357533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:01.357543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.357554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:01.357562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.357573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:01.357581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.357601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:01.357610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.357621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:01.357629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.357640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:01.357648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.357663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:01.357672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.357682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:01.357691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.357714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:01.357722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.357733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:01.357741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.357751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:01.357760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.357770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:01.357778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.357789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:01.357797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.357807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:01.357816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.357826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:01.357834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.357845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:01.357853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.357863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:01.357872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.357883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:01.357891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.357902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:01.357910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 
20:37:01.357925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:01.357933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.357944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:01.357952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.357963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:01.357971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.357981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:01.357990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.358011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:01.358025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.358040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:01.358053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.358067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:01.358080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.358094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:01.358106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.358117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:01.358125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.358137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:01.358145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.358156] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:01.358164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.358175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:01.358183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.358194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:01.358208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.358218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:01.358233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.358243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:01.358252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.358262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:01.358270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.358281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:01.358289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.358299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:01.358308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.358318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:01.358326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.358337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:01.358345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.358356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:93 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:01.358364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.358374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:01.358382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.358393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:01.358401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.358412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:01.358420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.358430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:01.358438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.358453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:01.358461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.358471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:01.358480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.358490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:01.358499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.358509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ccf10 is same with the state(6) to be set 00:14:01.799 [2024-11-26 20:37:01.358522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.799 [2024-11-26 20:37:01.358528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.799 [2024-11-26 20:37:01.358536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78304 len:8 PRP1 0x0 PRP2 0x0 00:14:01.799 [2024-11-26 20:37:01.358545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.362614] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:14:01.799 [2024-11-26 20:37:01.362734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.799 [2024-11-26 20:37:01.362788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.362827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.799 [2024-11-26 20:37:01.362866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.362899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.799 [2024-11-26 20:37:01.362931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.362963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.799 [2024-11-26 20:37:01.363004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:01.363041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:14:01.799 [2024-11-26 20:37:01.363125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x185dc60 (9): Bad file descriptor 00:14:01.799 [2024-11-26 20:37:01.366631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:01.799 [2024-11-26 20:37:01.397723] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:14:01.799 9130.00 IOPS, 35.66 MiB/s [2024-11-26T20:37:16.354Z] 9478.67 IOPS, 37.03 MiB/s [2024-11-26T20:37:16.354Z] 9641.00 IOPS, 37.66 MiB/s [2024-11-26T20:37:16.354Z] [2024-11-26 20:37:04.932957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.799 [2024-11-26 20:37:04.933317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:04.933417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.799 [2024-11-26 20:37:04.933464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:04.933503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.799 [2024-11-26 20:37:04.933542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:04.933575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.799 [2024-11-26 20:37:04.933643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:04.933683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x185dc60 is same with the state(6) to be set 00:14:01.799 [2024-11-26 20:37:04.934984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:04.935083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:04.935138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:04.935174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:04.935213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:04.935251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:04.935289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:04.935322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:04.935359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:113328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:04.935405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:04.935439] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:04.935473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:04.935506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:04.935543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:04.935577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.799 [2024-11-26 20:37:04.935628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:04.935667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:04.935706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:04.935757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:04.935796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:04.935833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:04.935871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.799 [2024-11-26 20:37:04.935905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.799 [2024-11-26 20:37:04.935951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.935989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.800 [2024-11-26 20:37:04.936029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.936063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.800 [2024-11-26 20:37:04.936099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.936137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:112960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.800 [2024-11-26 20:37:04.936174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.936208] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:57 nsid:1 lba:112968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.800 [2024-11-26 20:37:04.936249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.936283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:113360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.936319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.936353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:113368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.936389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.936423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.936462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.936497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:113384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.936544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.936583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.936635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.936670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.936714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.936752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.936791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.936829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.936867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.936901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.936937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.936974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 
lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:113464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:113480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:113496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.800 [2024-11-26 20:37:04.937605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.800 [2024-11-26 20:37:04.937624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.800 [2024-11-26 20:37:04.937643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.800 [2024-11-26 20:37:04.937662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:113008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.800 [2024-11-26 20:37:04.937680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.800 
[2024-11-26 20:37:04.937699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.800 [2024-11-26 20:37:04.937718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.800 [2024-11-26 20:37:04.937746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:113584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:113600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937896] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:113624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:113632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.937985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.937993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.938021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.938030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.938040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.938048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.938059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.800 [2024-11-26 20:37:04.938068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.938079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.800 [2024-11-26 20:37:04.938087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.938098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:113048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.800 [2024-11-26 20:37:04.938106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.938117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.800 [2024-11-26 20:37:04.938126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.938141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.800 [2024-11-26 20:37:04.938154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.938164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:113072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.800 [2024-11-26 20:37:04.938173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.938183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:113080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.800 [2024-11-26 20:37:04.938192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.938202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.800 [2024-11-26 20:37:04.938210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.938221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:113096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.800 [2024-11-26 20:37:04.938229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.938245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.800 [2024-11-26 20:37:04.938253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.938264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:113112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.800 [2024-11-26 20:37:04.938272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.938283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.800 [2024-11-26 20:37:04.938291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.938302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:113128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.800 [2024-11-26 20:37:04.938310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.938321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.800 [2024-11-26 20:37:04.938329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.800 [2024-11-26 20:37:04.938340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:113144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.800 [2024-11-26 20:37:04.938349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.938359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.801 [2024-11-26 20:37:04.938368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.938379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.801 [2024-11-26 20:37:04.938387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.938398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.801 [2024-11-26 20:37:04.938406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.938416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.801 [2024-11-26 20:37:04.938425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.938435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:113696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.801 [2024-11-26 20:37:04.938443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.938454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.801 [2024-11-26 20:37:04.938462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.938472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.801 [2024-11-26 20:37:04.938485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.938495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:113720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.801 [2024-11-26 20:37:04.938504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:14:01.801 [2024-11-26 20:37:04.938514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.801 [2024-11-26 20:37:04.938522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.938533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.801 [2024-11-26 20:37:04.938541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.938552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:113168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.801 [2024-11-26 20:37:04.938560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.938570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:113176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.801 [2024-11-26 20:37:04.938579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.940913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.801 [2024-11-26 20:37:04.940964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.941000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:113192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.801 [2024-11-26 20:37:04.941033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.941067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.801 [2024-11-26 20:37:04.941099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.941140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:113208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.801 [2024-11-26 20:37:04.941174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.941207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:113216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.801 [2024-11-26 20:37:04.941246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.941280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.801 [2024-11-26 20:37:04.941313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 
20:37:04.941351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.801 [2024-11-26 20:37:04.941387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.941434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.801 [2024-11-26 20:37:04.941473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.941510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.801 [2024-11-26 20:37:04.941547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.941585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:113256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.801 [2024-11-26 20:37:04.941643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.941682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.801 [2024-11-26 20:37:04.941720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.941755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.801 [2024-11-26 20:37:04.941794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.941828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.801 [2024-11-26 20:37:04.941861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.941895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d1370 is same with the state(6) to be set 00:14:01.801 [2024-11-26 20:37:04.941952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.801 [2024-11-26 20:37:04.942025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.801 [2024-11-26 20:37:04.942066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113288 len:8 PRP1 0x0 PRP2 0x0 00:14:01.801 [2024-11-26 20:37:04.942100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.942134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.801 [2024-11-26 20:37:04.942164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.801 [2024-11-26 20:37:04.942194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:113744 len:8 PRP1 0x0 PRP2 0x0 00:14:01.801 [2024-11-26 20:37:04.942230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.942263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.801 [2024-11-26 20:37:04.942293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.801 [2024-11-26 20:37:04.942332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113752 len:8 PRP1 0x0 PRP2 0x0 00:14:01.801 [2024-11-26 20:37:04.942385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.942435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.801 [2024-11-26 20:37:04.942472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.801 [2024-11-26 20:37:04.942503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113760 len:8 PRP1 0x0 PRP2 0x0 00:14:01.801 [2024-11-26 20:37:04.942540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.942582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.801 [2024-11-26 20:37:04.942630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.801 [2024-11-26 20:37:04.942661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113768 len:8 PRP1 0x0 PRP2 0x0 00:14:01.801 [2024-11-26 20:37:04.942701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.942734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.801 [2024-11-26 20:37:04.942764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.801 [2024-11-26 20:37:04.942794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113776 len:8 PRP1 0x0 PRP2 0x0 00:14:01.801 [2024-11-26 20:37:04.942826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.942858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.801 [2024-11-26 20:37:04.942888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.801 [2024-11-26 20:37:04.942917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113784 len:8 PRP1 0x0 PRP2 0x0 00:14:01.801 [2024-11-26 20:37:04.942958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.942995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.801 [2024-11-26 20:37:04.943036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.801 [2024-11-26 20:37:04.943066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113792 len:8 PRP1 0x0 PRP2 
0x0 00:14:01.801 [2024-11-26 20:37:04.943099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.943132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.801 [2024-11-26 20:37:04.943166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.801 [2024-11-26 20:37:04.943196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113800 len:8 PRP1 0x0 PRP2 0x0 00:14:01.801 [2024-11-26 20:37:04.943229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.943265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.801 [2024-11-26 20:37:04.943294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.801 [2024-11-26 20:37:04.943324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113808 len:8 PRP1 0x0 PRP2 0x0 00:14:01.801 [2024-11-26 20:37:04.943363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.943409] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.801 [2024-11-26 20:37:04.943444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.801 [2024-11-26 20:37:04.943474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113816 len:8 PRP1 0x0 PRP2 0x0 00:14:01.801 [2024-11-26 20:37:04.943514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.943546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.801 [2024-11-26 20:37:04.943586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.801 [2024-11-26 20:37:04.943637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113824 len:8 PRP1 0x0 PRP2 0x0 00:14:01.801 [2024-11-26 20:37:04.943676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.943710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.801 [2024-11-26 20:37:04.943744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.801 [2024-11-26 20:37:04.943776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113832 len:8 PRP1 0x0 PRP2 0x0 00:14:01.801 [2024-11-26 20:37:04.943812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.943845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.801 [2024-11-26 20:37:04.943875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.801 [2024-11-26 20:37:04.943905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113840 len:8 PRP1 0x0 PRP2 0x0 00:14:01.801 [2024-11-26 20:37:04.943938] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.943970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.801 [2024-11-26 20:37:04.943999] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.801 [2024-11-26 20:37:04.944035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113848 len:8 PRP1 0x0 PRP2 0x0 00:14:01.801 [2024-11-26 20:37:04.944073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.944106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.801 [2024-11-26 20:37:04.944135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.801 [2024-11-26 20:37:04.944165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113856 len:8 PRP1 0x0 PRP2 0x0 00:14:01.801 [2024-11-26 20:37:04.944197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.944229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.801 [2024-11-26 20:37:04.944266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.801 [2024-11-26 20:37:04.944295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113864 len:8 PRP1 0x0 PRP2 0x0 00:14:01.801 [2024-11-26 20:37:04.944328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.944360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.801 [2024-11-26 20:37:04.944392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.801 [2024-11-26 20:37:04.944430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113872 len:8 PRP1 0x0 PRP2 0x0 00:14:01.801 [2024-11-26 20:37:04.944467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.944500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.801 [2024-11-26 20:37:04.944529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.801 [2024-11-26 20:37:04.944558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113880 len:8 PRP1 0x0 PRP2 0x0 00:14:01.801 [2024-11-26 20:37:04.944615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.944652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.801 [2024-11-26 20:37:04.944689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.801 [2024-11-26 20:37:04.944735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113888 len:8 PRP1 0x0 PRP2 0x0 00:14:01.801 [2024-11-26 20:37:04.944780] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.944814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.801 [2024-11-26 20:37:04.944851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.801 [2024-11-26 20:37:04.944884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113896 len:8 PRP1 0x0 PRP2 0x0 00:14:01.801 [2024-11-26 20:37:04.944924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.944957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.801 [2024-11-26 20:37:04.944987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.801 [2024-11-26 20:37:04.945017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113904 len:8 PRP1 0x0 PRP2 0x0 00:14:01.801 [2024-11-26 20:37:04.945052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.945088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.801 [2024-11-26 20:37:04.945121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.801 [2024-11-26 20:37:04.945151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113912 len:8 PRP1 0x0 PRP2 0x0 00:14:01.801 [2024-11-26 20:37:04.945183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.945215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.801 [2024-11-26 20:37:04.945248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.801 [2024-11-26 20:37:04.945283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113920 len:8 PRP1 0x0 PRP2 0x0 00:14:01.801 [2024-11-26 20:37:04.945324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.945357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.801 [2024-11-26 20:37:04.945386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.801 [2024-11-26 20:37:04.945415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113928 len:8 PRP1 0x0 PRP2 0x0 00:14:01.801 [2024-11-26 20:37:04.945451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:04.945530] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:14:01.801 [2024-11-26 20:37:04.945581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:14:01.801 [2024-11-26 20:37:04.945679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x185dc60 (9): Bad file descriptor 00:14:01.801 [2024-11-26 20:37:04.949202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:14:01.801 [2024-11-26 20:37:04.979334] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:14:01.801 9606.80 IOPS, 37.53 MiB/s [2024-11-26T20:37:16.356Z] 9691.00 IOPS, 37.86 MiB/s [2024-11-26T20:37:16.356Z] 9755.71 IOPS, 38.11 MiB/s [2024-11-26T20:37:16.356Z] 9809.25 IOPS, 38.32 MiB/s [2024-11-26T20:37:16.356Z] 9849.11 IOPS, 38.47 MiB/s [2024-11-26T20:37:16.356Z] [2024-11-26 20:37:09.441414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:93120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.801 [2024-11-26 20:37:09.441829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:09.441899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.801 [2024-11-26 20:37:09.441939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:09.441978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:93136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.801 [2024-11-26 20:37:09.442040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:09.442090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.801 [2024-11-26 20:37:09.442129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:09.442164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:93152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.801 [2024-11-26 20:37:09.442197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.801 [2024-11-26 20:37:09.442230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:93160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.442262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.442301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:93168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.442339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.442373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.442410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 
[2024-11-26 20:37:09.442444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:92736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 [2024-11-26 20:37:09.442477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.442511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:92744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 [2024-11-26 20:37:09.442544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.442577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:92752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 [2024-11-26 20:37:09.442632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.442670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:92760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 [2024-11-26 20:37:09.442710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.442744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:92768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 [2024-11-26 20:37:09.442781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.442830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:92776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 [2024-11-26 20:37:09.442864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.442901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 [2024-11-26 20:37:09.442945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.442958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 [2024-11-26 20:37:09.442967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.442978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:92800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 [2024-11-26 20:37:09.442987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.442997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 [2024-11-26 20:37:09.443005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443016] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 [2024-11-26 20:37:09.443024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:92824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 [2024-11-26 20:37:09.443043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:92832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 [2024-11-26 20:37:09.443062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:92840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 [2024-11-26 20:37:09.443081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:92848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 [2024-11-26 20:37:09.443100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 [2024-11-26 20:37:09.443119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:72 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93296 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 
[2024-11-26 20:37:09.443606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 [2024-11-26 20:37:09.443625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 [2024-11-26 20:37:09.443644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:92888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 [2024-11-26 20:37:09.443667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 [2024-11-26 20:37:09.443687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:92904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 [2024-11-26 20:37:09.443706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 [2024-11-26 20:37:09.443725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 [2024-11-26 20:37:09.443744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:93392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:93400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:93416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:93432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:93440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:93456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:93464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.443989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:93472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.443997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.444007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.444016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.444029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:93488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.444037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.444048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:93496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.802 [2024-11-26 20:37:09.444056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.444067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:92928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 [2024-11-26 20:37:09.444076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.444086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:92936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 [2024-11-26 20:37:09.444095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.444106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:92944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.802 [2024-11-26 20:37:09.444115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.802 [2024-11-26 20:37:09.444125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.803 [2024-11-26 20:37:09.444134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.803 [2024-11-26 20:37:09.444159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.803 [2024-11-26 20:37:09.444178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.803 [2024-11-26 20:37:09.444197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:92984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.803 [2024-11-26 20:37:09.444217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:93504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.803 [2024-11-26 20:37:09.444238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:93512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.803 [2024-11-26 20:37:09.444257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:93520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.803 [2024-11-26 20:37:09.444276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:93528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.803 [2024-11-26 20:37:09.444295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:93536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.803 [2024-11-26 20:37:09.444314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:93544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.803 [2024-11-26 20:37:09.444334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.803 [2024-11-26 20:37:09.444353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:93560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:01.803 [2024-11-26 20:37:09.444373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.803 [2024-11-26 20:37:09.444392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 
20:37:09.444407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:93000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.803 [2024-11-26 20:37:09.444416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.803 [2024-11-26 20:37:09.444437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:93016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.803 [2024-11-26 20:37:09.444455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:93024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.803 [2024-11-26 20:37:09.444475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:93032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.803 [2024-11-26 20:37:09.444499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:93040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.803 [2024-11-26 20:37:09.444519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:93048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.803 [2024-11-26 20:37:09.444541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:93056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.803 [2024-11-26 20:37:09.444561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:93064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.803 [2024-11-26 20:37:09.444580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:93072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.803 [2024-11-26 20:37:09.444609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444620] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.803 [2024-11-26 20:37:09.444628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.803 [2024-11-26 20:37:09.444649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:93096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.803 [2024-11-26 20:37:09.444672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:93104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.803 [2024-11-26 20:37:09.444693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cd9c0 is same with the state(6) to be set 00:14:01.803 [2024-11-26 20:37:09.444715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.803 [2024-11-26 20:37:09.444721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.803 [2024-11-26 20:37:09.444728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93112 len:8 PRP1 0x0 PRP2 0x0 00:14:01.803 [2024-11-26 20:37:09.444736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.803 [2024-11-26 20:37:09.444753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.803 [2024-11-26 20:37:09.444760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93568 len:8 PRP1 0x0 PRP2 0x0 00:14:01.803 [2024-11-26 20:37:09.444768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.803 [2024-11-26 20:37:09.444783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.803 [2024-11-26 20:37:09.444789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93576 len:8 PRP1 0x0 PRP2 0x0 00:14:01.803 [2024-11-26 20:37:09.444801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.803 [2024-11-26 20:37:09.444816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.803 [2024-11-26 
20:37:09.444822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93584 len:8 PRP1 0x0 PRP2 0x0 00:14:01.803 [2024-11-26 20:37:09.444831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.803 [2024-11-26 20:37:09.444846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.803 [2024-11-26 20:37:09.444854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93592 len:8 PRP1 0x0 PRP2 0x0 00:14:01.803 [2024-11-26 20:37:09.444863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.803 [2024-11-26 20:37:09.444879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.803 [2024-11-26 20:37:09.444886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93600 len:8 PRP1 0x0 PRP2 0x0 00:14:01.803 [2024-11-26 20:37:09.444894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.803 [2024-11-26 20:37:09.444908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.803 [2024-11-26 20:37:09.444919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93608 len:8 PRP1 0x0 PRP2 0x0 00:14:01.803 [2024-11-26 20:37:09.444928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.803 [2024-11-26 20:37:09.444944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.803 [2024-11-26 20:37:09.444950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93616 len:8 PRP1 0x0 PRP2 0x0 00:14:01.803 [2024-11-26 20:37:09.444959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.803 [2024-11-26 20:37:09.444975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.803 [2024-11-26 20:37:09.444982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93624 len:8 PRP1 0x0 PRP2 0x0 00:14:01.803 [2024-11-26 20:37:09.444990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.444999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.803 [2024-11-26 20:37:09.445005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.803 [2024-11-26 20:37:09.445012] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93632 len:8 PRP1 0x0 PRP2 0x0 00:14:01.803 [2024-11-26 20:37:09.445020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.445029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.803 [2024-11-26 20:37:09.445034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.803 [2024-11-26 20:37:09.445042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93640 len:8 PRP1 0x0 PRP2 0x0 00:14:01.803 [2024-11-26 20:37:09.445053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.445062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.803 [2024-11-26 20:37:09.445068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.803 [2024-11-26 20:37:09.445074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93648 len:8 PRP1 0x0 PRP2 0x0 00:14:01.803 [2024-11-26 20:37:09.445083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.445092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.803 [2024-11-26 20:37:09.445098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.803 [2024-11-26 20:37:09.445104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93656 len:8 PRP1 0x0 PRP2 0x0 00:14:01.803 [2024-11-26 20:37:09.445112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.445121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.803 [2024-11-26 20:37:09.445127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.803 [2024-11-26 20:37:09.445134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93664 len:8 PRP1 0x0 PRP2 0x0 00:14:01.803 [2024-11-26 20:37:09.445142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.445151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.803 [2024-11-26 20:37:09.445161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.803 [2024-11-26 20:37:09.445167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93672 len:8 PRP1 0x0 PRP2 0x0 00:14:01.803 [2024-11-26 20:37:09.445176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.445185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.803 [2024-11-26 20:37:09.445190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.803 [2024-11-26 20:37:09.445197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:93680 len:8 PRP1 0x0 PRP2 0x0 00:14:01.803 [2024-11-26 20:37:09.445206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.445215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.803 [2024-11-26 20:37:09.445220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.803 [2024-11-26 20:37:09.445226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93688 len:8 PRP1 0x0 PRP2 0x0 00:14:01.803 [2024-11-26 20:37:09.445236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.445245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.803 [2024-11-26 20:37:09.445251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.803 [2024-11-26 20:37:09.445257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93696 len:8 PRP1 0x0 PRP2 0x0 00:14:01.803 [2024-11-26 20:37:09.445266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.445275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.803 [2024-11-26 20:37:09.445281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.803 [2024-11-26 20:37:09.445288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93704 len:8 PRP1 0x0 PRP2 0x0 00:14:01.803 [2024-11-26 20:37:09.445302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.445311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.803 [2024-11-26 20:37:09.445317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.803 [2024-11-26 20:37:09.445324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93712 len:8 PRP1 0x0 PRP2 0x0 00:14:01.803 [2024-11-26 20:37:09.445332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.445341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.803 [2024-11-26 20:37:09.445347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.803 [2024-11-26 20:37:09.445354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93720 len:8 PRP1 0x0 PRP2 0x0 00:14:01.803 [2024-11-26 20:37:09.445362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.445371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.803 [2024-11-26 20:37:09.445377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.803 [2024-11-26 20:37:09.445384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93728 len:8 PRP1 0x0 PRP2 0x0 
00:14:01.803 [2024-11-26 20:37:09.445392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.445405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.803 [2024-11-26 20:37:09.445411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.803 [2024-11-26 20:37:09.445418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93736 len:8 PRP1 0x0 PRP2 0x0 00:14:01.803 [2024-11-26 20:37:09.445426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.445435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.803 [2024-11-26 20:37:09.445441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.803 [2024-11-26 20:37:09.445447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93744 len:8 PRP1 0x0 PRP2 0x0 00:14:01.803 [2024-11-26 20:37:09.445456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.445465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:01.803 [2024-11-26 20:37:09.445471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:01.803 [2024-11-26 20:37:09.445478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93752 len:8 PRP1 0x0 PRP2 0x0 00:14:01.803 [2024-11-26 20:37:09.445486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.445525] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:14:01.803 [2024-11-26 20:37:09.445567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.803 [2024-11-26 20:37:09.445579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.445602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.803 [2024-11-26 20:37:09.445612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.445621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.803 [2024-11-26 20:37:09.445630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.803 [2024-11-26 20:37:09.445644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.803 [2024-11-26 20:37:09.445652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.804 [2024-11-26 20:37:09.445661] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:14:01.804 [2024-11-26 20:37:09.445698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x185dc60 (9): Bad file descriptor 00:14:01.804 [2024-11-26 20:37:09.449056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:14:01.804 [2024-11-26 20:37:09.479165] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:14:01.804 9668.10 IOPS, 37.77 MiB/s [2024-11-26T20:37:16.359Z] 9711.36 IOPS, 37.94 MiB/s [2024-11-26T20:37:16.359Z] 9752.75 IOPS, 38.10 MiB/s [2024-11-26T20:37:16.359Z] 9777.31 IOPS, 38.19 MiB/s [2024-11-26T20:37:16.359Z] 9806.36 IOPS, 38.31 MiB/s [2024-11-26T20:37:16.359Z] 9821.40 IOPS, 38.36 MiB/s 00:14:01.804 Latency(us) 00:14:01.804 [2024-11-26T20:37:16.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.804 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:01.804 Verification LBA range: start 0x0 length 0x4000 00:14:01.804 NVMe0n1 : 15.01 9821.87 38.37 285.48 0.00 12635.90 567.14 39321.60 00:14:01.804 [2024-11-26T20:37:16.359Z] =================================================================================================================== 00:14:01.804 [2024-11-26T20:37:16.359Z] Total : 9821.87 38.37 285.48 0.00 12635.90 567.14 39321.60 00:14:01.804 Received shutdown signal, test time was about 15.000000 seconds 00:14:01.804 00:14:01.804 Latency(us) 00:14:01.804 [2024-11-26T20:37:16.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.804 [2024-11-26T20:37:16.359Z] =================================================================================================================== 00:14:01.804 [2024-11-26T20:37:16.359Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:01.804 20:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:14:01.804 20:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:14:01.804 20:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:14:01.804 20:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=74492 00:14:01.804 20:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:14:01.804 20:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 74492 /var/tmp/bdevperf.sock 00:14:01.804 20:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 74492 ']' 00:14:01.804 20:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:01.804 20:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:01.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:01.804 20:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:01.804 20:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:01.804 20:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:02.061 20:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.061 20:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:14:02.061 20:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:14:02.061 [2024-11-26 20:37:16.581789] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:14:02.061 20:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:14:02.318 [2024-11-26 20:37:16.801958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:14:02.318 20:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:02.576 NVMe0n1 00:14:02.576 20:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:03.140 00:14:03.140 20:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:03.398 00:14:03.398 20:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:03.398 20:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:14:03.654 20:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:03.654 20:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:14:06.937 20:37:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:06.937 20:37:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:14:06.937 20:37:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=74569 00:14:06.937 20:37:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 74569 00:14:06.937 20:37:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:08.308 { 00:14:08.308 "results": [ 00:14:08.308 { 00:14:08.308 "job": "NVMe0n1", 00:14:08.308 "core_mask": "0x1", 00:14:08.308 "workload": "verify", 00:14:08.308 "status": "finished", 00:14:08.308 "verify_range": { 00:14:08.308 "start": 0, 00:14:08.308 "length": 16384 00:14:08.308 }, 00:14:08.308 "queue_depth": 128, 
00:14:08.308 "io_size": 4096, 00:14:08.308 "runtime": 1.012578, 00:14:08.308 "iops": 6858.730882954202, 00:14:08.308 "mibps": 26.79191751153985, 00:14:08.308 "io_failed": 0, 00:14:08.308 "io_timeout": 0, 00:14:08.308 "avg_latency_us": 18590.13549714792, 00:14:08.309 "min_latency_us": 2331.5692307692307, 00:14:08.309 "max_latency_us": 20265.747692307694 00:14:08.309 } 00:14:08.309 ], 00:14:08.309 "core_count": 1 00:14:08.309 } 00:14:08.309 20:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:08.309 [2024-11-26 20:37:15.494436] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:14:08.309 [2024-11-26 20:37:15.495084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74492 ] 00:14:08.309 [2024-11-26 20:37:15.627698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.309 [2024-11-26 20:37:15.672164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.309 [2024-11-26 20:37:15.713737] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:08.309 [2024-11-26 20:37:18.188410] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:14:08.309 [2024-11-26 20:37:18.188549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:08.309 [2024-11-26 20:37:18.188565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.309 [2024-11-26 20:37:18.188577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:08.309 [2024-11-26 20:37:18.188587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.309 [2024-11-26 20:37:18.188608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:08.309 [2024-11-26 20:37:18.188617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.309 [2024-11-26 20:37:18.188627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:08.309 [2024-11-26 20:37:18.188635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.309 [2024-11-26 20:37:18.188645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:14:08.309 [2024-11-26 20:37:18.188685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:14:08.309 [2024-11-26 20:37:18.188707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1625c60 (9): Bad file descriptor 00:14:08.309 [2024-11-26 20:37:18.192383] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 
00:14:08.309 Running I/O for 1 seconds... 00:14:08.309 6816.00 IOPS, 26.62 MiB/s 00:14:08.309 Latency(us) 00:14:08.309 [2024-11-26T20:37:22.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.309 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:08.309 Verification LBA range: start 0x0 length 0x4000 00:14:08.309 NVMe0n1 : 1.01 6858.73 26.79 0.00 0.00 18590.14 2331.57 20265.75 00:14:08.309 [2024-11-26T20:37:22.864Z] =================================================================================================================== 00:14:08.309 [2024-11-26T20:37:22.864Z] Total : 6858.73 26.79 0.00 0.00 18590.14 2331.57 20265.75 00:14:08.309 20:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:14:08.309 20:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:08.309 20:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:08.567 20:37:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:08.567 20:37:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:14:08.826 20:37:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:09.083 20:37:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:14:12.482 20:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:12.482 20:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:14:12.482 20:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 74492 00:14:12.482 20:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 74492 ']' 00:14:12.482 20:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 74492 00:14:12.482 20:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:14:12.482 20:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.482 20:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74492 00:14:12.482 killing process with pid 74492 00:14:12.482 20:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:12.482 20:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:12.482 20:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74492' 00:14:12.482 20:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 74492 00:14:12.482 20:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 74492 00:14:12.482 20:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:14:12.482 20:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:12.740 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:14:12.740 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:12.740 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:14:12.740 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:12.740 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:14:12.740 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:12.740 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:14:12.740 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:12.740 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:12.740 rmmod nvme_tcp 00:14:12.740 rmmod nvme_fabrics 00:14:12.740 rmmod nvme_keyring 00:14:12.740 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:12.740 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:14:12.740 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:14:12.740 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 74232 ']' 00:14:12.740 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 74232 00:14:12.740 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 74232 ']' 00:14:12.740 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 74232 00:14:12.740 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:14:12.740 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.740 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74232 00:14:12.740 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:12.740 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:12.740 killing process with pid 74232 00:14:12.740 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74232' 00:14:12.740 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 74232 00:14:12.740 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 74232 00:14:12.999 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:12.999 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:12.999 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:12.999 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:14:12.999 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:14:12.999 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:12.999 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:14:12.999 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:12.999 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:12.999 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:12.999 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:12.999 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:12.999 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:12.999 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:12.999 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:12.999 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:12.999 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:12.999 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:13.257 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:13.257 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:13.257 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:13.257 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:13.257 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:13.257 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.257 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:13.257 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.257 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:14:13.257 ************************************ 00:14:13.257 END TEST nvmf_failover 00:14:13.257 ************************************ 00:14:13.257 00:14:13.257 real 0m32.251s 00:14:13.257 user 2m4.477s 00:14:13.257 sys 0m4.719s 00:14:13.257 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:13.257 20:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:13.257 20:37:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:14:13.257 20:37:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:13.257 20:37:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:13.257 20:37:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:13.257 ************************************ 00:14:13.257 START TEST nvmf_host_discovery 00:14:13.257 ************************************ 00:14:13.257 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:14:13.517 * Looking for test storage... 
00:14:13.517 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:14:13.517 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:13.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.518 --rc genhtml_branch_coverage=1 00:14:13.518 --rc genhtml_function_coverage=1 00:14:13.518 --rc genhtml_legend=1 00:14:13.518 --rc geninfo_all_blocks=1 00:14:13.518 --rc geninfo_unexecuted_blocks=1 00:14:13.518 00:14:13.518 ' 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:13.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.518 --rc genhtml_branch_coverage=1 00:14:13.518 --rc genhtml_function_coverage=1 00:14:13.518 --rc genhtml_legend=1 00:14:13.518 --rc geninfo_all_blocks=1 00:14:13.518 --rc geninfo_unexecuted_blocks=1 00:14:13.518 00:14:13.518 ' 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:13.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.518 --rc genhtml_branch_coverage=1 00:14:13.518 --rc genhtml_function_coverage=1 00:14:13.518 --rc genhtml_legend=1 00:14:13.518 --rc geninfo_all_blocks=1 00:14:13.518 --rc geninfo_unexecuted_blocks=1 00:14:13.518 00:14:13.518 ' 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:13.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.518 --rc genhtml_branch_coverage=1 00:14:13.518 --rc genhtml_function_coverage=1 00:14:13.518 --rc genhtml_legend=1 00:14:13.518 --rc geninfo_all_blocks=1 00:14:13.518 --rc geninfo_unexecuted_blocks=1 00:14:13.518 00:14:13.518 ' 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:13.518 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:13.518 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:13.519 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:14:13.519 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:13.519 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:13.519 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:13.519 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:13.519 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:13.519 Cannot find device "nvmf_init_br" 00:14:13.519 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:14:13.519 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:13.519 Cannot find device "nvmf_init_br2" 00:14:13.519 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:14:13.519 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:13.519 Cannot find device "nvmf_tgt_br" 00:14:13.519 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:14:13.519 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:13.519 Cannot find device "nvmf_tgt_br2" 00:14:13.519 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:14:13.519 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:13.519 Cannot find device "nvmf_init_br" 00:14:13.519 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:14:13.519 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:13.519 Cannot find device "nvmf_init_br2" 00:14:13.519 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:14:13.519 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:13.519 Cannot find device "nvmf_tgt_br" 00:14:13.519 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:14:13.519 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:13.519 Cannot find device "nvmf_tgt_br2" 00:14:13.519 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:14:13.519 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:13.519 Cannot find device "nvmf_br" 00:14:13.519 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:14:13.519 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:13.519 Cannot find device "nvmf_init_if" 00:14:13.519 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:14:13.519 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:13.519 Cannot find device "nvmf_init_if2" 00:14:13.519 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:14:13.519 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:13.519 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:14:13.519 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:14:13.519 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:13.519 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:13.519 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:14:13.519 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:13.519 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:13.519 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:13.519 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:13.519 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:13.519 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:13.778 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:13.778 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:14:13.778 00:14:13.778 --- 10.0.0.3 ping statistics --- 00:14:13.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.778 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:13.778 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:13.778 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:14:13.778 00:14:13.778 --- 10.0.0.4 ping statistics --- 00:14:13.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.778 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:13.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:13.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:13.778 00:14:13.778 --- 10.0.0.1 ping statistics --- 00:14:13.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.778 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:13.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:13.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:14:13.778 00:14:13.778 --- 10.0.0.2 ping statistics --- 00:14:13.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.778 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=74887 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 74887 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 74887 ']' 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:13.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:13.778 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:13.778 [2024-11-26 20:37:28.299165] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:14:13.778 [2024-11-26 20:37:28.299240] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.036 [2024-11-26 20:37:28.435880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.036 [2024-11-26 20:37:28.471996] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.036 [2024-11-26 20:37:28.472038] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.036 [2024-11-26 20:37:28.472045] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.036 [2024-11-26 20:37:28.472051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.036 [2024-11-26 20:37:28.472055] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:14.036 [2024-11-26 20:37:28.472319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.036 [2024-11-26 20:37:28.505608] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:14.969 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.969 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:14:14.969 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:14.969 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:14.969 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.969 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.969 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:14.969 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.970 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.970 [2024-11-26 20:37:29.248444] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.970 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.970 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:14:14.970 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.970 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.970 [2024-11-26 20:37:29.256536] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:14:14.970 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.970 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:14:14.970 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.970 20:37:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.970 null0 00:14:14.970 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.970 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:14:14.970 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.970 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.970 null1 00:14:14.970 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.970 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:14:14.970 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.970 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.970 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.970 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=74919 00:14:14.970 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 74919 /tmp/host.sock 00:14:14.970 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 74919 ']' 00:14:14.970 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:14:14.970 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.970 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:14:14.970 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:14:14.970 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.970 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.970 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:14:14.970 [2024-11-26 20:37:29.330946] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:14:14.970 [2024-11-26 20:37:29.331022] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74919 ] 00:14:14.970 [2024-11-26 20:37:29.468993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.970 [2024-11-26 20:37:29.519470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.227 [2024-11-26 20:37:29.556068] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:15.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:15.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:14:15.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:15.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:14:15.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:15.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:14:15.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:15.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:14:15.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:14:15.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:15.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:15.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:15.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:15.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:15.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:14:15.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:14:15.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:15.793 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:15.793 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:14:15.793 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:15.793 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.793 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:15.793 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.793 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:14:15.793 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:14:15.793 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.793 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:15.793 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.793 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:14:15.793 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:15.793 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.793 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:15.793 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:15.793 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:15.793 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:15.793 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.052 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:14:16.052 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:14:16.052 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:16.052 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:16.052 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:16.052 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:16.052 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.052 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:16.052 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.052 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:14:16.052 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:14:16.052 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.052 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:16.052 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.052 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@91 -- # get_subsystem_names 00:14:16.052 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:16.052 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:16.052 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.052 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:16.053 [2024-11-26 20:37:30.472860] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:14:16.053 20:37:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:16.053 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.311 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:14:16.311 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:14:16.879 [2024-11-26 20:37:31.242904] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:14:16.880 [2024-11-26 20:37:31.242939] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:14:16.880 
[2024-11-26 20:37:31.242956] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:14:16.880 [2024-11-26 20:37:31.248943] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:14:16.880 [2024-11-26 20:37:31.303306] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:14:16.880 [2024-11-26 20:37:31.304167] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1384e60:1 started. 00:14:16.880 [2024-11-26 20:37:31.305827] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:14:16.880 [2024-11-26 20:37:31.305850] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:14:16.880 [2024-11-26 20:37:31.311668] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1384e60 was disconnected and freed. delete nvme_qpair. 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:17.159 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.419 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:14:17.419 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:17.419 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:14:17.419 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:14:17.419 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:17.419 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:17.419 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:17.419 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:17.419 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:17.419 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:17.419 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:14:17.419 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.419 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.419 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:14:17.419 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.419 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:14:17.419 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:14:17.419 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:17.419 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:17.419 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:17.420 [2024-11-26 20:37:31.774701] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x13932f0:1 started. 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:17.420 [2024-11-26 20:37:31.781836] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x13932f0 was disconnected and freed. delete nvme_qpair. 
00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.420 [2024-11-26 20:37:31.861822] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:14:17.420 [2024-11-26 20:37:31.862620] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:14:17.420 [2024-11-26 20:37:31.862648] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:17.420 [2024-11-26 20:37:31.868614] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:17.420 [2024-11-26 20:37:31.927500] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:14:17.420 [2024-11-26 20:37:31.927561] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:14:17.420 [2024-11-26 20:37:31.927569] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:14:17.420 [2024-11-26 20:37:31.927573] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.420 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.680 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:14:17.680 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:17.680 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:14:17.680 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:14:17.680 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:17.680 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:17.680 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:17.680 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:17.680 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:17.680 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:17.680 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:17.680 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:17.680 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.680 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.680 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.680 [2024-11-26 20:37:32.022754] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:14:17.680 [2024-11-26 20:37:32.022788] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:14:17.680 [2024-11-26 20:37:32.023408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.680 [2024-11-26 20:37:32.023437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.680 [2024-11-26 20:37:32.023445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.680 [2024-11-26 20:37:32.023453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.680 [2024-11-26 20:37:32.023460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.680 [2024-11-26 20:37:32.023466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.680 [2024-11-26 20:37:32.023472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.680 [2024-11-26 20:37:32.023478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.680 [2024-11-26 20:37:32.023484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361240 is same with the state(6) to be set 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:17.680 [2024-11-26 20:37:32.028760] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:14:17.680 [2024-11-26 20:37:32.028792] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:14:17.680 [2024-11-26 20:37:32.028837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1361240 (9): Bad file descriptor 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:17.680 20:37:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:17.680 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:14:17.681 20:37:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:17.681 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.939 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:14:17.939 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:17.939 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:14:17.939 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:14:17.939 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:17.939 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:17.939 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:17.939 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:17.939 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:17.939 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:17.939 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:17.939 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:17.939 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.939 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.939 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.939 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:14:17.939 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:14:17.939 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:17.939 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:17.939 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:17.939 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.939 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.875 [2024-11-26 20:37:33.293319] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:14:18.875 [2024-11-26 20:37:33.293349] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:14:18.875 [2024-11-26 20:37:33.293361] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:14:18.875 [2024-11-26 20:37:33.299351] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:14:18.875 [2024-11-26 20:37:33.357655] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:14:18.875 [2024-11-26 20:37:33.358337] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x136d390:1 started. 
00:14:18.875 [2024-11-26 20:37:33.360257] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:14:18.875 [2024-11-26 20:37:33.360290] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:18.875 [2024-11-26 20:37:33.362307] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x136d390 was disconnected and freed. delete nvme_qpair. 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.875 request: 00:14:18.875 { 00:14:18.875 "name": "nvme", 00:14:18.875 "trtype": "tcp", 00:14:18.875 "traddr": "10.0.0.3", 00:14:18.875 "adrfam": "ipv4", 00:14:18.875 "trsvcid": "8009", 00:14:18.875 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:18.875 "wait_for_attach": true, 00:14:18.875 "method": "bdev_nvme_start_discovery", 00:14:18.875 "req_id": 1 00:14:18.875 } 00:14:18.875 Got JSON-RPC error response 00:14:18.875 response: 00:14:18.875 { 00:14:18.875 "code": -17, 00:14:18.875 "message": "File exists" 00:14:18.875 } 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:18.875 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:19.133 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.133 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:19.133 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:19.133 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:14:19.133 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:19.133 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:19.133 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:19.133 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:19.133 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:19.133 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:19.133 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.133 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:19.133 request: 00:14:19.133 { 00:14:19.133 "name": "nvme_second", 00:14:19.133 "trtype": "tcp", 00:14:19.133 "traddr": "10.0.0.3", 00:14:19.133 "adrfam": "ipv4", 00:14:19.133 "trsvcid": "8009", 00:14:19.133 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:19.133 "wait_for_attach": true, 00:14:19.133 "method": 
"bdev_nvme_start_discovery", 00:14:19.133 "req_id": 1 00:14:19.133 } 00:14:19.133 Got JSON-RPC error response 00:14:19.133 response: 00:14:19.133 { 00:14:19.133 "code": -17, 00:14:19.133 "message": "File exists" 00:14:19.133 } 00:14:19.133 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:19.133 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:14:19.133 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:19.134 20:37:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.134 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:20.118 [2024-11-26 20:37:34.540719] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:14:20.118 [2024-11-26 20:37:34.540779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x135ff50 with addr=10.0.0.3, port=8010 00:14:20.118 [2024-11-26 20:37:34.540796] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:14:20.118 [2024-11-26 20:37:34.540804] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:14:20.119 [2024-11-26 20:37:34.540810] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:14:21.051 [2024-11-26 20:37:35.540715] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:14:21.051 [2024-11-26 20:37:35.540773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x135ff50 with addr=10.0.0.3, port=8010 00:14:21.051 [2024-11-26 20:37:35.540789] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:14:21.051 [2024-11-26 20:37:35.540796] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:14:21.051 [2024-11-26 20:37:35.540802] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:14:22.424 [2024-11-26 20:37:36.540608] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:14:22.424 request: 00:14:22.424 { 00:14:22.424 "name": "nvme_second", 00:14:22.424 "trtype": "tcp", 00:14:22.424 "traddr": "10.0.0.3", 00:14:22.424 "adrfam": "ipv4", 00:14:22.424 "trsvcid": "8010", 00:14:22.424 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:22.424 "wait_for_attach": false, 00:14:22.424 "attach_timeout_ms": 3000, 00:14:22.424 "method": "bdev_nvme_start_discovery", 00:14:22.424 "req_id": 1 00:14:22.424 } 00:14:22.424 Got JSON-RPC error response 00:14:22.425 response: 00:14:22.425 { 00:14:22.425 "code": -110, 00:14:22.425 "message": "Connection timed out" 00:14:22.425 } 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:14:22.425 20:37:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 74919 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:22.425 rmmod nvme_tcp 00:14:22.425 rmmod nvme_fabrics 00:14:22.425 rmmod nvme_keyring 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 74887 ']' 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 74887 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 74887 ']' 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 74887 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74887 00:14:22.425 killing process with pid 74887 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74887' 00:14:22.425 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 74887 00:14:22.425 
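
The two JSON-RPC failures traced above are the expected outcome of the negative checks in discovery.sh: starting a second discovery service for 10.0.0.3:8009, where one is already attached, is rejected with -17 ("File exists"), and starting one against 10.0.0.3:8010, where nothing is listening, gives up after the 3000 ms attach timeout with -110 ("Connection timed out"). A minimal stand-alone sketch of the same two checks, assuming the host RPC socket at /tmp/host.sock and the addresses used in this particular run, could look like:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as used in this log
    sock=/tmp/host.sock

    # A second discovery service for an address/port that already has one
    # attached must be rejected with -17 "File exists".
    if "$rpc" -s "$sock" bdev_nvme_start_discovery -b nvme_second -t tcp \
          -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w; then
        echo "unexpected: duplicate discovery service was accepted" >&2; exit 1
    fi

    # Discovery against a port with no listener must fail with -110
    # "Connection timed out" once the 3000 ms attach timeout expires.
    if "$rpc" -s "$sock" bdev_nvme_start_discovery -b nvme_second -t tcp \
          -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000; then
        echo "unexpected: discovery on a dead port did not time out" >&2; exit 1
    fi
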
20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 74887 00:14:22.682 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:22.682 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:22.682 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:22.682 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:14:22.682 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:22.682 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:14:22.682 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:14:22.682 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:22.682 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:22.682 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:22.682 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:22.682 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:22.682 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:22.682 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:22.682 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:22.682 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:22.682 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:22.682 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:22.682 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:22.682 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:22.682 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:22.682 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:22.682 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:22.682 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.682 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:22.682 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.682 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:14:22.682 00:14:22.682 real 0m9.471s 00:14:22.682 user 0m17.122s 00:14:22.682 sys 0m1.688s 00:14:22.682 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:22.682 ************************************ 00:14:22.682 END TEST nvmf_host_discovery 
00:14:22.682 ************************************ 00:14:22.682 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:22.945 ************************************ 00:14:22.945 START TEST nvmf_host_multipath_status 00:14:22.945 ************************************ 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:14:22.945 * Looking for test storage... 00:14:22.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:22.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.945 --rc genhtml_branch_coverage=1 00:14:22.945 --rc genhtml_function_coverage=1 00:14:22.945 --rc genhtml_legend=1 00:14:22.945 --rc geninfo_all_blocks=1 00:14:22.945 --rc geninfo_unexecuted_blocks=1 00:14:22.945 00:14:22.945 ' 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:22.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.945 --rc genhtml_branch_coverage=1 00:14:22.945 --rc genhtml_function_coverage=1 00:14:22.945 --rc genhtml_legend=1 00:14:22.945 --rc geninfo_all_blocks=1 00:14:22.945 --rc geninfo_unexecuted_blocks=1 00:14:22.945 00:14:22.945 ' 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:22.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.945 --rc genhtml_branch_coverage=1 00:14:22.945 --rc genhtml_function_coverage=1 00:14:22.945 --rc genhtml_legend=1 00:14:22.945 --rc geninfo_all_blocks=1 00:14:22.945 --rc geninfo_unexecuted_blocks=1 00:14:22.945 00:14:22.945 ' 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:22.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.945 --rc genhtml_branch_coverage=1 00:14:22.945 --rc genhtml_function_coverage=1 00:14:22.945 --rc genhtml_legend=1 00:14:22.945 --rc geninfo_all_blocks=1 00:14:22.945 --rc geninfo_unexecuted_blocks=1 00:14:22.945 00:14:22.945 ' 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:22.945 20:37:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.945 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:22.945 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:22.946 Cannot find device "nvmf_init_br" 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:22.946 Cannot find device "nvmf_init_br2" 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:22.946 Cannot find device "nvmf_tgt_br" 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:14:22.946 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:23.203 Cannot find device "nvmf_tgt_br2" 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:23.203 Cannot find device "nvmf_init_br" 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:23.203 Cannot find device "nvmf_init_br2" 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:23.203 Cannot find device "nvmf_tgt_br" 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:23.203 Cannot find device "nvmf_tgt_br2" 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:23.203 Cannot find device "nvmf_br" 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:14:23.203 Cannot find device "nvmf_init_if" 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:23.203 Cannot find device "nvmf_init_if2" 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:23.203 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:23.203 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:23.203 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:23.461 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:23.461 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:23.461 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:23.461 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:23.461 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:23.461 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:23.461 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:23.461 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:23.461 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:23.461 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.113 ms 00:14:23.461 00:14:23.461 --- 10.0.0.3 ping statistics --- 00:14:23.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.461 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:14:23.461 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:23.461 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:23.461 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:14:23.461 00:14:23.461 --- 10.0.0.4 ping statistics --- 00:14:23.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.461 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:14:23.461 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:23.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:23.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:14:23.461 00:14:23.461 --- 10.0.0.1 ping statistics --- 00:14:23.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.461 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:23.461 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:23.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:23.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:14:23.461 00:14:23.461 --- 10.0.0.2 ping statistics --- 00:14:23.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.461 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:23.461 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.461 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:14:23.461 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:23.461 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.461 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:23.461 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:23.462 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.462 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:23.462 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:23.462 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:14:23.462 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:23.462 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:23.462 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:23.462 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=75420 00:14:23.462 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 75420 00:14:23.462 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 75420 ']' 00:14:23.462 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.462 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:23.462 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:23.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.462 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
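
The nvmf_veth_init block traced above is what gives the multipath test its two target addresses: initiator interfaces 10.0.0.1 and 10.0.0.2 stay on the host side, target interfaces 10.0.0.3 and 10.0.0.4 live inside the nvmf_tgt_ns_spdk namespace, everything is joined over the nvmf_br bridge, iptables ACCEPT rules open port 4420, and the target application is then launched inside the namespace. A condensed sketch of the same sequence for one initiator/target pair, assuming root privileges and that none of these interfaces already exist:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3      # host -> namespaced target, as verified in the trace
    # finally the target is started inside the namespace, as recorded above:
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
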
00:14:23.462 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:23.462 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:23.462 [2024-11-26 20:37:37.891418] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:14:23.462 [2024-11-26 20:37:37.891488] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.719 [2024-11-26 20:37:38.031776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:23.719 [2024-11-26 20:37:38.070355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.719 [2024-11-26 20:37:38.070405] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.719 [2024-11-26 20:37:38.070413] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.719 [2024-11-26 20:37:38.070419] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.719 [2024-11-26 20:37:38.070424] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.719 [2024-11-26 20:37:38.071164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.719 [2024-11-26 20:37:38.071466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.719 [2024-11-26 20:37:38.105460] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:24.286 20:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:24.286 20:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:14:24.286 20:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:24.286 20:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:24.286 20:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:24.286 20:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.286 20:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=75420 00:14:24.286 20:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:24.546 [2024-11-26 20:37:39.014524] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:24.546 20:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:24.888 Malloc0 00:14:24.888 20:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:14:25.162 20:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:25.423 20:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:25.681 [2024-11-26 20:37:40.064360] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:25.681 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:14:25.942 [2024-11-26 20:37:40.276423] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:14:25.942 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=75471 00:14:25.942 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:25.942 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:14:25.942 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 75471 /var/tmp/bdevperf.sock 00:14:25.942 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 75471 ']' 00:14:25.942 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:25.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:25.942 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:25.942 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
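
For the multipath run itself, the target is provisioned with a single malloc namespace exported through two TCP listeners on the same subsystem, and bdevperf is started as a separate process that later attaches a controller over each path. A sketch of the provisioning RPCs, mirroring the commands recorded in this trace (the rpc.py path, addresses, and socket names are specific to this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # target side: transport, backing bdev, subsystem with two listeners
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

    # later, the port_status checks ask the bdevperf instance for its I/O paths
    # and filter on the listener port, as seen further down in the trace:
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
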
00:14:25.942 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:25.942 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:26.899 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:26.899 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:14:26.899 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:14:26.899 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:14:27.185 Nvme0n1 00:14:27.185 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:14:27.445 Nvme0n1 00:14:27.445 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:14:27.445 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:14:29.974 20:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:14:29.974 20:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:14:29.974 20:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:14:30.235 20:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:14:31.170 20:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:14:31.170 20:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:14:31.170 20:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:31.170 20:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:31.428 20:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:31.428 20:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:14:31.428 20:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:31.428 20:37:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:31.685 20:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:31.685 20:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:31.685 20:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:31.685 20:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:31.943 20:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:31.943 20:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:31.943 20:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:31.943 20:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:31.943 20:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:31.943 20:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:31.943 20:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:31.943 20:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:32.201 20:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:32.201 20:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:14:32.201 20:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:32.201 20:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:32.459 20:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:32.459 20:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:14:32.459 20:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:32.716 20:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:14:32.975 20:37:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:14:33.907 20:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:14:33.907 20:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:14:33.907 20:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:33.907 20:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:34.166 20:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:34.166 20:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:14:34.166 20:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:34.166 20:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:34.424 20:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:34.424 20:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:34.424 20:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:34.424 20:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:34.682 20:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:34.682 20:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:34.682 20:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:34.683 20:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:34.939 20:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:34.939 20:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:34.939 20:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:34.939 20:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:35.196 20:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:35.196 20:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:14:35.196 20:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:35.196 20:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:35.454 20:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:35.454 20:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:14:35.454 20:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:35.713 20:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:14:35.970 20:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:14:36.915 20:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:14:36.915 20:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:14:36.915 20:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:36.915 20:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:37.173 20:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:37.173 20:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:14:37.173 20:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:37.173 20:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:37.432 20:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:37.432 20:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:37.432 20:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:37.432 20:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:37.691 20:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:37.691 20:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:14:37.691 20:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:37.691 20:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:37.949 20:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:37.949 20:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:37.949 20:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:37.949 20:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:37.949 20:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:37.949 20:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:14:37.949 20:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:37.949 20:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:38.206 20:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:38.206 20:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:14:38.206 20:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:38.464 20:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:14:38.720 20:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:14:40.093 20:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:14:40.093 20:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:14:40.093 20:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:40.093 20:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:40.093 20:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:40.093 20:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:14:40.093 20:37:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:40.093 20:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:40.351 20:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:40.351 20:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:40.351 20:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:40.351 20:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:40.609 20:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:40.609 20:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:40.609 20:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:40.609 20:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:40.866 20:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:40.866 20:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:40.866 20:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:40.866 20:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:41.124 20:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:41.124 20:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:14:41.124 20:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:41.124 20:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:41.381 20:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:41.381 20:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:14:41.381 20:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:14:41.640 20:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:14:41.640 20:37:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:14:43.011 20:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:14:43.011 20:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:14:43.011 20:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:43.011 20:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:43.011 20:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:43.011 20:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:14:43.011 20:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:43.011 20:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:43.269 20:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:43.269 20:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:43.269 20:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:43.269 20:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:43.269 20:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:43.269 20:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:43.269 20:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:43.269 20:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:43.526 20:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:43.526 20:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:14:43.526 20:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:43.526 20:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:14:43.784 20:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:43.784 20:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:14:43.784 20:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:43.784 20:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:44.042 20:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:44.042 20:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:14:44.042 20:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:14:44.325 20:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:14:44.325 20:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:14:45.697 20:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:14:45.697 20:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:14:45.697 20:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:45.697 20:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:45.697 20:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:45.697 20:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:14:45.697 20:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:45.697 20:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:46.128 20:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:46.128 20:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:46.128 20:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:46.128 20:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
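[editor's note] For readers following the trace, every port_status check echoed above reduces to the same two-step pattern: query the bdevperf RPC socket for the current I/O paths, then filter one field for the listener port under test. A minimal sketch, reconstructed only from the commands visible in this log (the actual port_status helper in host/multipath_status.sh is not reproduced here and may differ):

    # port and field are stand-ins for the values the test passes in (e.g. 4421, accessible)
    port=4421
    field=accessible
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field"
    # The test then string-compares the printed true/false against the expected value,
    # which is what the repeated [[ true == \t\r\u\e ]] / [[ false == \f\a\l\s\e ]] checks above show.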
00:14:46.128 20:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:46.128 20:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:46.128 20:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:46.128 20:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:46.402 20:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:46.402 20:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:14:46.402 20:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:46.402 20:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:46.402 20:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:46.402 20:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:14:46.402 20:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:46.402 20:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:46.675 20:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:46.675 20:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:14:46.946 20:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:14:46.946 20:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:14:46.946 20:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:14:47.213 20:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:14:48.145 20:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:14:48.145 20:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:14:48.145 20:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:14:48.145 20:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:48.403 20:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:48.403 20:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:14:48.403 20:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:48.403 20:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:48.659 20:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:48.659 20:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:48.659 20:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:48.659 20:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:48.915 20:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:48.915 20:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:48.915 20:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:48.915 20:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:49.171 20:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:49.171 20:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:49.171 20:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:49.171 20:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:49.428 20:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:49.428 20:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:14:49.428 20:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:49.428 20:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:49.685 20:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:49.685 
20:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:14:49.685 20:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:49.685 20:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:14:49.942 20:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:14:50.874 20:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:14:50.874 20:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:14:50.874 20:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:50.874 20:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:51.131 20:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:51.131 20:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:14:51.131 20:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:51.131 20:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:51.388 20:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:51.388 20:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:51.388 20:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:51.388 20:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:51.645 20:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:51.645 20:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:51.645 20:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:51.645 20:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:51.902 20:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:51.902 20:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:51.902 20:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:51.902 20:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:51.902 20:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:51.902 20:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:14:51.902 20:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:51.902 20:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:52.159 20:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:52.159 20:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:14:52.159 20:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:52.416 20:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:14:52.768 20:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:14:53.714 20:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:14:53.714 20:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:14:53.714 20:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:53.714 20:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:53.971 20:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:53.971 20:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:14:53.971 20:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:53.971 20:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:53.971 20:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:53.971 20:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:14:53.971 20:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:53.971 20:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:54.229 20:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:54.229 20:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:54.229 20:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:54.229 20:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:54.486 20:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:54.486 20:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:54.486 20:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:54.486 20:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:54.749 20:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:54.749 20:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:14:54.749 20:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:54.749 20:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:55.017 20:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:55.017 20:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:14:55.017 20:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:55.017 20:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:14:55.302 20:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:14:56.674 20:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:14:56.674 20:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:14:56.674 20:38:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:56.674 20:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:56.674 20:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:56.674 20:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:14:56.674 20:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:56.674 20:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:56.932 20:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:56.932 20:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:56.932 20:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:56.932 20:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:56.932 20:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:56.932 20:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:57.190 20:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:57.190 20:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:57.190 20:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:57.190 20:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:57.190 20:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:57.190 20:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:57.448 20:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:57.448 20:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:14:57.448 20:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:57.448 20:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:57.705 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:57.705 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 75471 00:14:57.705 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 75471 ']' 00:14:57.705 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 75471 00:14:57.706 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:14:57.706 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:57.706 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75471 00:14:57.706 killing process with pid 75471 00:14:57.706 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:57.706 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:57.706 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75471' 00:14:57.706 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 75471 00:14:57.706 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 75471 00:14:57.706 { 00:14:57.706 "results": [ 00:14:57.706 { 00:14:57.706 "job": "Nvme0n1", 00:14:57.706 "core_mask": "0x4", 00:14:57.706 "workload": "verify", 00:14:57.706 "status": "terminated", 00:14:57.706 "verify_range": { 00:14:57.706 "start": 0, 00:14:57.706 "length": 16384 00:14:57.706 }, 00:14:57.706 "queue_depth": 128, 00:14:57.706 "io_size": 4096, 00:14:57.706 "runtime": 30.109654, 00:14:57.706 "iops": 10842.801448332817, 00:14:57.706 "mibps": 42.35469315755007, 00:14:57.706 "io_failed": 0, 00:14:57.706 "io_timeout": 0, 00:14:57.706 "avg_latency_us": 11781.265351503916, 00:14:57.706 "min_latency_us": 976.7384615384616, 00:14:57.706 "max_latency_us": 3019898.88 00:14:57.706 } 00:14:57.706 ], 00:14:57.706 "core_count": 1 00:14:57.706 } 00:14:57.972 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 75471 00:14:57.972 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:57.972 [2024-11-26 20:37:40.335737] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:14:57.972 [2024-11-26 20:37:40.335812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75471 ] 00:14:57.972 [2024-11-26 20:37:40.475394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.972 [2024-11-26 20:37:40.526289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.972 [2024-11-26 20:37:40.559691] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:57.972 Running I/O for 90 seconds... 
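[editor's note] As a quick sanity check on the bdevperf summary JSON above, the reported "mibps" figure is just the measured IOPS multiplied by the 4096-byte "io_size" and converted to MiB/s; a minimal sketch of that arithmetic with the values copied from the JSON (assuming bc is available on the host):

    # 10842.80 IOPS * 4096 B per I/O / 1048576 B per MiB  ->  ~42.35 MiB/s, matching "mibps" above
    echo '10842.801448332817 * 4096 / 1048576' | bc -l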
00:14:57.972 7574.00 IOPS, 29.59 MiB/s [2024-11-26T20:38:12.527Z] 7626.50 IOPS, 29.79 MiB/s [2024-11-26T20:38:12.527Z] 7859.67 IOPS, 30.70 MiB/s [2024-11-26T20:38:12.527Z] 8404.25 IOPS, 32.83 MiB/s [2024-11-26T20:38:12.527Z] 8712.60 IOPS, 34.03 MiB/s [2024-11-26T20:38:12.527Z] 8725.50 IOPS, 34.08 MiB/s [2024-11-26T20:38:12.527Z] 8939.43 IOPS, 34.92 MiB/s [2024-11-26T20:38:12.527Z] 9098.88 IOPS, 35.54 MiB/s [2024-11-26T20:38:12.527Z] 9224.33 IOPS, 36.03 MiB/s [2024-11-26T20:38:12.527Z] 9249.10 IOPS, 36.13 MiB/s [2024-11-26T20:38:12.527Z] 9285.18 IOPS, 36.27 MiB/s [2024-11-26T20:38:12.527Z] 9327.42 IOPS, 36.44 MiB/s [2024-11-26T20:38:12.527Z] 9393.23 IOPS, 36.69 MiB/s [2024-11-26T20:38:12.527Z] [2024-11-26 20:37:55.932374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.972 [2024-11-26 20:37:55.932441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:14:57.972 [2024-11-26 20:37:55.932483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.972 [2024-11-26 20:37:55.932494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:14:57.972 [2024-11-26 20:37:55.932510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.972 [2024-11-26 20:37:55.932519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:14:57.972 [2024-11-26 20:37:55.932536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.972 [2024-11-26 20:37:55.932544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:14:57.972 [2024-11-26 20:37:55.932560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.972 [2024-11-26 20:37:55.932568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:14:57.972 [2024-11-26 20:37:55.932583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.972 [2024-11-26 20:37:55.932603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:14:57.972 [2024-11-26 20:37:55.932619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.972 [2024-11-26 20:37:55.932627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:14:57.972 [2024-11-26 20:37:55.932643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.972 [2024-11-26 20:37:55.932652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 
sqhd:006d p:0 m:0 dnr:0 00:14:57.972 [2024-11-26 20:37:55.932667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.972 [2024-11-26 20:37:55.932676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:14:57.972 [2024-11-26 20:37:55.932721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:128136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.972 [2024-11-26 20:37:55.932730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:14:57.972 [2024-11-26 20:37:55.932746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:128144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.972 [2024-11-26 20:37:55.932755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:14:57.972 [2024-11-26 20:37:55.932771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.972 [2024-11-26 20:37:55.932780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:14:57.972 [2024-11-26 20:37:55.932795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:128160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.973 [2024-11-26 20:37:55.932804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.932820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.973 [2024-11-26 20:37:55.932828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.932844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:128176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.973 [2024-11-26 20:37:55.932853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.932869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:128184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.973 [2024-11-26 20:37:55.932877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.932893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.973 [2024-11-26 20:37:55.932904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.932920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:128200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.973 [2024-11-26 20:37:55.932929] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.932945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.973 [2024-11-26 20:37:55.932954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.932970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:128216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.973 [2024-11-26 20:37:55.932978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.932994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:128224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.973 [2024-11-26 20:37:55.933002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.973 [2024-11-26 20:37:55.933034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.973 [2024-11-26 20:37:55.933059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.973 [2024-11-26 20:37:55.933084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.973 [2024-11-26 20:37:55.933111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.973 [2024-11-26 20:37:55.933135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.973 [2024-11-26 20:37:55.933160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:14:57.973 [2024-11-26 20:37:55.933185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.973 [2024-11-26 20:37:55.933209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.973 [2024-11-26 20:37:55.933234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.973 [2024-11-26 20:37:55.933260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.973 [2024-11-26 20:37:55.933285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.973 [2024-11-26 20:37:55.933310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.973 [2024-11-26 20:37:55.933341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.973 [2024-11-26 20:37:55.933367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.973 [2024-11-26 20:37:55.933392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.973 [2024-11-26 20:37:55.933417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.973 [2024-11-26 20:37:55.933442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.973 [2024-11-26 20:37:55.933468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.973 [2024-11-26 20:37:55.933493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.973 [2024-11-26 20:37:55.933518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:128264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.973 [2024-11-26 20:37:55.933543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:128272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.973 [2024-11-26 20:37:55.933568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:128280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.973 [2024-11-26 20:37:55.933601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.973 [2024-11-26 20:37:55.933625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.973 [2024-11-26 20:37:55.933655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.973 [2024-11-26 20:37:55.933680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933696] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.973 [2024-11-26 20:37:55.933705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.973 [2024-11-26 20:37:55.933730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:128328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.973 [2024-11-26 20:37:55.933756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:14:57.973 [2024-11-26 20:37:55.933771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.974 [2024-11-26 20:37:55.933780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.933796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.974 [2024-11-26 20:37:55.933805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.933821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.974 [2024-11-26 20:37:55.933832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.933849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.974 [2024-11-26 20:37:55.933858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.933874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.974 [2024-11-26 20:37:55.933883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.933899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:128376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.974 [2024-11-26 20:37:55.933908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.933936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.974 [2024-11-26 20:37:55.933946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 
sqhd:001e p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.933962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.974 [2024-11-26 20:37:55.933972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.974 [2024-11-26 20:37:55.934010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.974 [2024-11-26 20:37:55.934035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.974 [2024-11-26 20:37:55.934061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.974 [2024-11-26 20:37:55.934085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.974 [2024-11-26 20:37:55.934110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.974 [2024-11-26 20:37:55.934134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.974 [2024-11-26 20:37:55.934159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.974 [2024-11-26 20:37:55.934185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:128400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.974 [2024-11-26 20:37:55.934210] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.974 [2024-11-26 20:37:55.934235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.974 [2024-11-26 20:37:55.934261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.974 [2024-11-26 20:37:55.934286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.974 [2024-11-26 20:37:55.934316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.974 [2024-11-26 20:37:55.934341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:128448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.974 [2024-11-26 20:37:55.934366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.974 [2024-11-26 20:37:55.934391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.974 [2024-11-26 20:37:55.934416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.974 [2024-11-26 20:37:55.934441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.974 
[2024-11-26 20:37:55.934468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.974 [2024-11-26 20:37:55.934492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.974 [2024-11-26 20:37:55.934517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.974 [2024-11-26 20:37:55.934543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.974 [2024-11-26 20:37:55.934571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.974 [2024-11-26 20:37:55.934605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.974 [2024-11-26 20:37:55.934635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.974 [2024-11-26 20:37:55.934660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.974 [2024-11-26 20:37:55.934691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.974 [2024-11-26 20:37:55.934717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:14:57.974 [2024-11-26 20:37:55.934733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 
lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.974 [2024-11-26 20:37:55.934742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.934758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.934767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.934784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.934793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.934809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.934818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.934835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.934844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.934860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.934869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.934885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.934894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.934909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.934919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.934935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.934949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.934965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.934974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.934990] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.934999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.935014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.935024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.935039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.935048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.935064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.935073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.935089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.935100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.935121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.935131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.935147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.935156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.935172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.935182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.935198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.975 [2024-11-26 20:37:55.935208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.935224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.975 [2024-11-26 20:37:55.935233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 
dnr:0 00:14:57.975 [2024-11-26 20:37:55.935249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.975 [2024-11-26 20:37:55.935262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.935278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.975 [2024-11-26 20:37:55.935287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.935303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:128544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.975 [2024-11-26 20:37:55.935312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.935328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:128552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.975 [2024-11-26 20:37:55.935336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.935353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:128560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.975 [2024-11-26 20:37:55.935362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.935991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:128568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.975 [2024-11-26 20:37:55.936010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.936036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.936045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.936069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.936077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.936100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.936109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.936132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.936141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.936164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.936175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.936199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.936208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.936231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.936249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.936280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.936290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.936313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.936323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.936346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.975 [2024-11-26 20:37:55.936355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:14:57.975 [2024-11-26 20:37:55.936378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.976 [2024-11-26 20:37:55.936387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:37:55.936410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.976 [2024-11-26 20:37:55.936419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:37:55.936442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.976 [2024-11-26 20:37:55.936450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:37:55.936474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.976 [2024-11-26 20:37:55.936483] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:37:55.936506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.976 [2024-11-26 20:37:55.936515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:37:55.936540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:129144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.976 [2024-11-26 20:37:55.936549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:14:57.976 9336.00 IOPS, 36.47 MiB/s [2024-11-26T20:38:12.531Z] 8713.60 IOPS, 34.04 MiB/s [2024-11-26T20:38:12.531Z] 8169.00 IOPS, 31.91 MiB/s [2024-11-26T20:38:12.531Z] 7783.12 IOPS, 30.40 MiB/s [2024-11-26T20:38:12.531Z] 7948.44 IOPS, 31.05 MiB/s [2024-11-26T20:38:12.531Z] 8231.58 IOPS, 32.15 MiB/s [2024-11-26T20:38:12.531Z] 8610.80 IOPS, 33.64 MiB/s [2024-11-26T20:38:12.531Z] 9050.14 IOPS, 35.35 MiB/s [2024-11-26T20:38:12.531Z] 9450.68 IOPS, 36.92 MiB/s [2024-11-26T20:38:12.531Z] 9637.91 IOPS, 37.65 MiB/s [2024-11-26T20:38:12.531Z] 9790.83 IOPS, 38.25 MiB/s [2024-11-26T20:38:12.531Z] 9938.04 IOPS, 38.82 MiB/s [2024-11-26T20:38:12.531Z] 10236.00 IOPS, 39.98 MiB/s [2024-11-26T20:38:12.531Z] 10511.11 IOPS, 41.06 MiB/s [2024-11-26T20:38:12.531Z] [2024-11-26 20:38:09.775979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.976 [2024-11-26 20:38:09.776032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.976 [2024-11-26 20:38:09.776080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.976 [2024-11-26 20:38:09.776102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.976 [2024-11-26 20:38:09.776121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.976 [2024-11-26 20:38:09.776142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6488 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.976 [2024-11-26 20:38:09.776162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.976 [2024-11-26 20:38:09.776181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.976 [2024-11-26 20:38:09.776201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.976 [2024-11-26 20:38:09.776221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.976 [2024-11-26 20:38:09.776241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.976 [2024-11-26 20:38:09.776260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.976 [2024-11-26 20:38:09.776280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.976 [2024-11-26 20:38:09.776300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.976 [2024-11-26 20:38:09.776320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.976 [2024-11-26 20:38:09.776344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:30 nsid:1 lba:7096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.976 [2024-11-26 20:38:09.776364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.976 [2024-11-26 20:38:09.776385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.976 [2024-11-26 20:38:09.776407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.976 [2024-11-26 20:38:09.776427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.976 [2024-11-26 20:38:09.776448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.976 [2024-11-26 20:38:09.776468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.976 [2024-11-26 20:38:09.776491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.976 [2024-11-26 20:38:09.776511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.976 [2024-11-26 20:38:09.776530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.976 [2024-11-26 20:38:09.776550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.976 [2024-11-26 20:38:09.776569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.976 [2024-11-26 20:38:09.776603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.976 [2024-11-26 20:38:09.776623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.976 [2024-11-26 20:38:09.776643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.976 [2024-11-26 20:38:09.776663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:14:57.976 [2024-11-26 20:38:09.776676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.976 [2024-11-26 20:38:09.776683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.776696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.977 [2024-11-26 20:38:09.776703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.776716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.977 [2024-11-26 20:38:09.776723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.776735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.977 [2024-11-26 20:38:09.776743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.776756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.977 [2024-11-26 20:38:09.776763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:14:57.977 
[2024-11-26 20:38:09.776777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.977 [2024-11-26 20:38:09.776784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.777875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.977 [2024-11-26 20:38:09.777893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.777908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.977 [2024-11-26 20:38:09.777916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.777929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.977 [2024-11-26 20:38:09.777943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.777956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.977 [2024-11-26 20:38:09.777964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.777992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.977 [2024-11-26 20:38:09.778000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.778013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.977 [2024-11-26 20:38:09.778020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.778033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.977 [2024-11-26 20:38:09.778040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.778053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.977 [2024-11-26 20:38:09.778060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.778073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.977 [2024-11-26 20:38:09.778080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.778094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.977 [2024-11-26 20:38:09.778101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.778114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.977 [2024-11-26 20:38:09.778121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.778135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.977 [2024-11-26 20:38:09.778142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.778165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.977 [2024-11-26 20:38:09.778173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.778186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.977 [2024-11-26 20:38:09.778194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.778207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.977 [2024-11-26 20:38:09.778220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.778233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.977 [2024-11-26 20:38:09.778240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.778253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.977 [2024-11-26 20:38:09.778261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.778274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.977 [2024-11-26 20:38:09.778281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.778294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.977 [2024-11-26 20:38:09.778301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.778314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.977 [2024-11-26 20:38:09.778321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.778334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.977 [2024-11-26 20:38:09.778341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.778354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.977 [2024-11-26 20:38:09.778361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.778374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.977 [2024-11-26 20:38:09.778381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.778394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.977 [2024-11-26 20:38:09.778401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:14:57.977 [2024-11-26 20:38:09.778414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.978 [2024-11-26 20:38:09.778421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:14:57.978 [2024-11-26 20:38:09.778434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.978 [2024-11-26 20:38:09.778441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:14:57.978 [2024-11-26 20:38:09.778454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.978 [2024-11-26 20:38:09.778461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:14:57.978 [2024-11-26 20:38:09.778478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.978 [2024-11-26 20:38:09.778486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:14:57.978 [2024-11-26 20:38:09.778500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.978 [2024-11-26 20:38:09.778507] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:14:57.978 [2024-11-26 20:38:09.779158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:14:57.978 [2024-11-26 20:38:09.779174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:14:57.978 [2024-11-26 20:38:09.779310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:57.978 [2024-11-26 20:38:09.779317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:14:57.978 - 00:14:57.983 [2024-11-26 20:38:09.779 - 20:38:09.790] nvme_qpair.c: 243/474: repeated *NOTICE* command/completion pairs - every remaining queued READ/WRITE on qid:1 (nsid:1, len:8, LBAs ~6488-8240) completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02), cdw0:0, sqhd advancing 0x005b through 0x002a, p:0 m:0 dnr:0
00:14:57.983 [2024-11-26 20:38:09.790623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:14:57.983 [2024-11-26 20:38:09.790634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:14:57.983 [2024-11-26 20:38:09.790647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.983 [2024-11-26 20:38:09.790654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:14:57.983 [2024-11-26 20:38:09.790674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.983 [2024-11-26 20:38:09.790683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:57.983 [2024-11-26 20:38:09.790696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.983 [2024-11-26 20:38:09.790703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:57.983 [2024-11-26 20:38:09.790716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.983 [2024-11-26 20:38:09.790723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:57.983 [2024-11-26 20:38:09.790736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.983 [2024-11-26 20:38:09.790743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:57.983 [2024-11-26 20:38:09.790756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.983 [2024-11-26 20:38:09.790762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:14:57.983 [2024-11-26 20:38:09.790775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.983 [2024-11-26 20:38:09.790782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:14:57.983 [2024-11-26 20:38:09.790795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.983 [2024-11-26 20:38:09.790802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:14:57.983 [2024-11-26 20:38:09.790815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.983 [2024-11-26 20:38:09.790822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:14:57.983 [2024-11-26 20:38:09.790836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.983 [2024-11-26 20:38:09.790843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:71 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:14:57.983 [2024-11-26 20:38:09.790856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.983 [2024-11-26 20:38:09.790863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:14:57.983 [2024-11-26 20:38:09.790875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.983 [2024-11-26 20:38:09.790882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:14:57.983 [2024-11-26 20:38:09.790900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.983 [2024-11-26 20:38:09.790907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:14:57.983 [2024-11-26 20:38:09.791817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.984 [2024-11-26 20:38:09.791833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.791847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.984 [2024-11-26 20:38:09.791854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.791867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.984 [2024-11-26 20:38:09.791874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.791887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.984 [2024-11-26 20:38:09.791894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.791907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.984 [2024-11-26 20:38:09.791914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.791927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.984 [2024-11-26 20:38:09.791934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.791946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.984 [2024-11-26 20:38:09.791953] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.791966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.984 [2024-11-26 20:38:09.791973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.791986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.984 [2024-11-26 20:38:09.791993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.792006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.984 [2024-11-26 20:38:09.792013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.792025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.984 [2024-11-26 20:38:09.792032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.792051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.984 [2024-11-26 20:38:09.792058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.792071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.984 [2024-11-26 20:38:09.792078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.792091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.984 [2024-11-26 20:38:09.792098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.792111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.984 [2024-11-26 20:38:09.792118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.792131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.984 [2024-11-26 20:38:09.792137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.792150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.984 [2024-11-26 20:38:09.792157] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.792170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.984 [2024-11-26 20:38:09.792177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.792701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.984 [2024-11-26 20:38:09.792716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.792730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.984 [2024-11-26 20:38:09.792737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.792750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.984 [2024-11-26 20:38:09.792758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.792770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.984 [2024-11-26 20:38:09.792777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.792790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.984 [2024-11-26 20:38:09.792797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.792810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.984 [2024-11-26 20:38:09.792824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.792837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.984 [2024-11-26 20:38:09.792844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.792857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.984 [2024-11-26 20:38:09.792864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.792877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8424 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:14:57.984 [2024-11-26 20:38:09.792884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.792897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.984 [2024-11-26 20:38:09.792903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.792917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.984 [2024-11-26 20:38:09.792924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.792937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.984 [2024-11-26 20:38:09.792944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.793221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.984 [2024-11-26 20:38:09.793233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.793247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.984 [2024-11-26 20:38:09.793255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.793267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.984 [2024-11-26 20:38:09.793275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.793287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.984 [2024-11-26 20:38:09.793295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.793307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.984 [2024-11-26 20:38:09.793314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.793327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.984 [2024-11-26 20:38:09.793339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.793352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 
lba:7824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.984 [2024-11-26 20:38:09.793360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.793372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.984 [2024-11-26 20:38:09.793379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.793392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.984 [2024-11-26 20:38:09.793399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:14:57.984 [2024-11-26 20:38:09.793414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.985 [2024-11-26 20:38:09.793421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.793434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.985 [2024-11-26 20:38:09.793441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.793454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.985 [2024-11-26 20:38:09.793461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.793474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.985 [2024-11-26 20:38:09.793481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.793493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.985 [2024-11-26 20:38:09.793501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.793513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.985 [2024-11-26 20:38:09.793520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.793533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.985 [2024-11-26 20:38:09.793540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.793553] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.985 [2024-11-26 20:38:09.793560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.793573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.985 [2024-11-26 20:38:09.793580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.793607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.985 [2024-11-26 20:38:09.793615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.793628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.985 [2024-11-26 20:38:09.793635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.794682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.985 [2024-11-26 20:38:09.794697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.794712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.985 [2024-11-26 20:38:09.794719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.794732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.985 [2024-11-26 20:38:09.794740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.794752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.985 [2024-11-26 20:38:09.794760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.794772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.985 [2024-11-26 20:38:09.794780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.794793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.985 [2024-11-26 20:38:09.794800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:14:57.985 
[2024-11-26 20:38:09.794812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.985 [2024-11-26 20:38:09.794819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.794832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.985 [2024-11-26 20:38:09.794839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.794852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.985 [2024-11-26 20:38:09.794860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.794872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.985 [2024-11-26 20:38:09.794879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.794898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.985 [2024-11-26 20:38:09.794906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.794919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.985 [2024-11-26 20:38:09.794926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.795573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.985 [2024-11-26 20:38:09.795598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.795612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.985 [2024-11-26 20:38:09.795620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.795633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.985 [2024-11-26 20:38:09.795640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.795653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.985 [2024-11-26 20:38:09.795660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 
cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.795673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.985 [2024-11-26 20:38:09.795680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.795693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.985 [2024-11-26 20:38:09.795700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.795712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.985 [2024-11-26 20:38:09.795719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.795732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.985 [2024-11-26 20:38:09.795740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.795752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.985 [2024-11-26 20:38:09.795759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.795772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.985 [2024-11-26 20:38:09.795779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.795792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.985 [2024-11-26 20:38:09.795807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.795820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.985 [2024-11-26 20:38:09.795828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.795840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.985 [2024-11-26 20:38:09.795847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.795860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.985 [2024-11-26 20:38:09.795868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.795881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.985 [2024-11-26 20:38:09.795888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.795908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.985 [2024-11-26 20:38:09.795916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:14:57.985 [2024-11-26 20:38:09.795929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.986 [2024-11-26 20:38:09.795936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:14:57.986 [2024-11-26 20:38:09.795949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.986 [2024-11-26 20:38:09.795956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:14:57.986 [2024-11-26 20:38:09.795969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.986 [2024-11-26 20:38:09.795976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:14:57.986 [2024-11-26 20:38:09.795989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.986 [2024-11-26 20:38:09.795996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:14:57.986 [2024-11-26 20:38:09.796009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.986 [2024-11-26 20:38:09.796016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:14:57.986 [2024-11-26 20:38:09.796029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.986 [2024-11-26 20:38:09.796036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:14:57.986 [2024-11-26 20:38:09.796049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.986 [2024-11-26 20:38:09.796056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:14:57.986 [2024-11-26 20:38:09.796073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.986 [2024-11-26 20:38:09.796080] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:14:57.986 [2024-11-26 20:38:09.797141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.986 [2024-11-26 20:38:09.797158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:14:57.986 [2024-11-26 20:38:09.797181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.986 [2024-11-26 20:38:09.797190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:57.986 [2024-11-26 20:38:09.797203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.986 [2024-11-26 20:38:09.797210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:57.986 [2024-11-26 20:38:09.797223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.986 [2024-11-26 20:38:09.797231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:57.986 [2024-11-26 20:38:09.797243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.986 [2024-11-26 20:38:09.797250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:57.986 [2024-11-26 20:38:09.797263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:57.986 [2024-11-26 20:38:09.797270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:14:57.986 [2024-11-26 20:38:09.797283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.986 [2024-11-26 20:38:09.797290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:14:57.986 [2024-11-26 20:38:09.797303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.986 [2024-11-26 20:38:09.797310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:14:57.986 10706.32 IOPS, 41.82 MiB/s [2024-11-26T20:38:12.541Z] 10776.03 IOPS, 42.09 MiB/s [2024-11-26T20:38:12.541Z] 10840.30 IOPS, 42.34 MiB/s [2024-11-26T20:38:12.541Z] Received shutdown signal, test time was about 30.110328 seconds 00:14:57.986 00:14:57.986 Latency(us) 00:14:57.986 [2024-11-26T20:38:12.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.986 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:57.986 Verification LBA range: start 0x0 length 0x4000 
00:14:57.986 Nvme0n1 : 30.11 10842.80 42.35 0.00 0.00 11781.27 976.74 3019898.88 00:14:57.986 [2024-11-26T20:38:12.541Z] =================================================================================================================== 00:14:57.986 [2024-11-26T20:38:12.541Z] Total : 10842.80 42.35 0.00 0.00 11781.27 976.74 3019898.88 00:14:57.986 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:57.986 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:14:57.986 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:57.986 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:14:57.986 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:57.986 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:58.244 rmmod nvme_tcp 00:14:58.244 rmmod nvme_fabrics 00:14:58.244 rmmod nvme_keyring 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 75420 ']' 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 75420 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 75420 ']' 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 75420 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75420 00:14:58.244 killing process with pid 75420 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75420' 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 75420 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 75420 
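For reference, the bdevperf summary above lists, per job, the runtime in seconds, IOPS, throughput in MiB/s, failed and timed-out I/Os per second (Fail/s, TO/s), and average/min/max completion latency in microseconds (Latency(us)). The throughput and IOPS columns are mutually consistent at this job's 4096-byte I/O size; a quick check, assuming only that 1 MiB = 1048576 bytes:

  awk 'BEGIN { printf "%.2f MiB/s\n", 10842.80 * 4096 / 1048576 }'   # prints 42.35, matching the Total row

The roughly 3.02 s maximum latency (3019898.88 us) against a ~11.8 ms average is consistent with I/O being held back while the active path reported ASYMMETRIC ACCESS INACCESSIBLE during the multipath transitions traced above.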
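The commands traced just above (nvmf_delete_subsystem through kill/wait of pid 75420) and the interface cleanup that follows are the nvmftestfini teardown for a NET_TYPE=virt run. Below is a minimal sketch of that sequence collapsed into one function; the NQN, the PID, and the nvmf_* interface, bridge, and namespace names are copied from this trace, while the function itself is illustrative and not the actual code in test/nvmf/common.sh:

  #!/usr/bin/env bash
  # Illustrative condensation of the teardown steps seen in this trace.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  teardown_virt_nvmf() {
    # Drop the subsystem the multipath test created on the SPDK target.
    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # Unload the kernel initiator modules; the rmmod output above shows
    # nvme_tcp, nvme_fabrics and nvme_keyring going with them.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Stop the nvmf target app (pid 75420 in this run). wait only succeeds
    # from the shell that originally launched the target.
    kill 75420
    wait 75420 || true

    # Restore iptables rules, dropping only the SPDK_NVMF entries.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Tear down the veth/bridge topology used when NET_TYPE=virt.
    for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$ifc" nomaster
      ip link set "$ifc" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    # remove_spdk_ns (traced below) then deletes the nvmf_tgt_ns_spdk namespace itself.
  }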
00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:58.244 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:58.502 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:58.502 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:58.502 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:58.502 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:58.502 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:58.502 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:58.502 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:58.502 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:58.502 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:58.502 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:58.502 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.502 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:58.502 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.502 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:14:58.502 00:14:58.502 real 0m35.703s 00:14:58.502 user 1m54.358s 00:14:58.502 sys 0m8.864s 00:14:58.502 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:14:58.502 ************************************ 00:14:58.502 END TEST nvmf_host_multipath_status 00:14:58.502 ************************************ 00:14:58.502 20:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:58.502 20:38:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:14:58.502 20:38:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:58.502 20:38:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:58.502 20:38:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:58.502 ************************************ 00:14:58.502 START TEST nvmf_discovery_remove_ifc 00:14:58.502 ************************************ 00:14:58.502 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:14:58.761 * Looking for test storage... 00:14:58.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:58.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.761 --rc genhtml_branch_coverage=1 00:14:58.761 --rc genhtml_function_coverage=1 00:14:58.761 --rc genhtml_legend=1 00:14:58.761 --rc geninfo_all_blocks=1 00:14:58.761 --rc geninfo_unexecuted_blocks=1 00:14:58.761 00:14:58.761 ' 00:14:58.761 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:58.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.761 --rc genhtml_branch_coverage=1 00:14:58.761 --rc genhtml_function_coverage=1 00:14:58.761 --rc genhtml_legend=1 00:14:58.762 --rc geninfo_all_blocks=1 00:14:58.762 --rc geninfo_unexecuted_blocks=1 00:14:58.762 00:14:58.762 ' 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:58.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.762 --rc genhtml_branch_coverage=1 00:14:58.762 --rc genhtml_function_coverage=1 00:14:58.762 --rc genhtml_legend=1 00:14:58.762 --rc geninfo_all_blocks=1 00:14:58.762 --rc geninfo_unexecuted_blocks=1 00:14:58.762 00:14:58.762 ' 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:58.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.762 --rc genhtml_branch_coverage=1 00:14:58.762 --rc genhtml_function_coverage=1 00:14:58.762 --rc genhtml_legend=1 00:14:58.762 --rc geninfo_all_blocks=1 00:14:58.762 --rc geninfo_unexecuted_blocks=1 00:14:58.762 00:14:58.762 ' 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:58.762 20:38:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:58.762 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:58.762 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:58.763 20:38:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:58.763 Cannot find device "nvmf_init_br" 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:58.763 Cannot find device "nvmf_init_br2" 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:58.763 Cannot find device "nvmf_tgt_br" 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:58.763 Cannot find device "nvmf_tgt_br2" 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:58.763 Cannot find device "nvmf_init_br" 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:58.763 Cannot find device "nvmf_init_br2" 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:58.763 Cannot find device "nvmf_tgt_br" 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:58.763 Cannot find device "nvmf_tgt_br2" 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:58.763 Cannot find device "nvmf_br" 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:58.763 Cannot find device "nvmf_init_if" 00:14:58.763 20:38:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:58.763 Cannot find device "nvmf_init_if2" 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:58.763 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:58.763 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:58.763 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:59.021 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:59.021 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:59.021 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:59.021 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:59.021 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:59.021 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:59.022 20:38:13 
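
nvmf_veth_init builds a small virtual topology so the initiator (host side) and the target (inside a network namespace) can talk NVMe/TCP over 10.0.0.0/24. A condensed sketch of that topology, assuming root privileges and iproute2; the second initiator/target pair and error handling are omitted, interface names match the trace:

#!/usr/bin/env bash
set -e

ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry IP traffic, the *_br ends join the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Initiator address on the host, target address inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# One bridge ties both veth segments into a single L2 domain.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

ping -c 1 10.0.0.3   # host -> in-namespace target sanity check
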
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:59.022 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:59.022 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:14:59.022 00:14:59.022 --- 10.0.0.3 ping statistics --- 00:14:59.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.022 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:59.022 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:59.022 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:14:59.022 00:14:59.022 --- 10.0.0.4 ping statistics --- 00:14:59.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.022 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:59.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:59.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:14:59.022 00:14:59.022 --- 10.0.0.1 ping statistics --- 00:14:59.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.022 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:59.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:59.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:14:59.022 00:14:59.022 --- 10.0.0.2 ping statistics --- 00:14:59.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.022 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=76278 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 76278 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 76278 ']' 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:59.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
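
nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten blocks until the app's RPC socket answers. A rough equivalent of that wait loop (hypothetical standalone helper; rpc.py and the binary path mirror the trace, the retry count is illustrative):

# Launch the target in the namespace and poll its RPC socket until it answers.
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

rpc_sock=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
    # rpc_get_methods only succeeds once the app is listening on the socket.
    ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null && break
    # Give up early if the target process already died.
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done
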
00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:59.022 20:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:14:59.022 [2024-11-26 20:38:13.510176] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:14:59.022 [2024-11-26 20:38:13.510230] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.280 [2024-11-26 20:38:13.650008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.280 [2024-11-26 20:38:13.685360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.280 [2024-11-26 20:38:13.685401] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.280 [2024-11-26 20:38:13.685408] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.280 [2024-11-26 20:38:13.685413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.280 [2024-11-26 20:38:13.685417] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.280 [2024-11-26 20:38:13.685681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.280 [2024-11-26 20:38:13.717727] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:59.846 20:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:59.846 20:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:14:59.846 20:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:59.846 20:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:59.846 20:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:00.104 20:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.104 20:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:15:00.104 20:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.104 20:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:00.104 [2024-11-26 20:38:14.418529] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.104 [2024-11-26 20:38:14.426619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:15:00.104 null0 00:15:00.104 [2024-11-26 20:38:14.458553] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:00.104 20:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.104 20:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@59 -- # hostpid=76310 00:15:00.104 20:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 76310 /tmp/host.sock 00:15:00.104 20:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 76310 ']' 00:15:00.104 20:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:15:00.104 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:00.104 20:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:00.104 20:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:00.104 20:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:00.104 20:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:00.105 20:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:15:00.105 [2024-11-26 20:38:14.512520] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:15:00.105 [2024-11-26 20:38:14.512570] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76310 ] 00:15:00.105 [2024-11-26 20:38:14.647226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.363 [2024-11-26 20:38:14.680165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.929 20:38:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.929 20:38:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:15:00.929 20:38:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:00.929 20:38:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:15:00.929 20:38:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.929 20:38:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:00.929 20:38:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.929 20:38:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:15:00.929 20:38:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.929 20:38:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:00.929 [2024-11-26 20:38:15.423661] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:00.929 20:38:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.929 20:38:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:15:00.929 20:38:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.929 20:38:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:02.303 [2024-11-26 20:38:16.471153] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:02.303 [2024-11-26 20:38:16.471181] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:02.303 [2024-11-26 20:38:16.471194] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:02.303 [2024-11-26 20:38:16.477183] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:15:02.303 [2024-11-26 20:38:16.531447] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:15:02.303 [2024-11-26 20:38:16.532135] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xfea000:1 started. 00:15:02.303 [2024-11-26 20:38:16.533367] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:02.303 [2024-11-26 20:38:16.533410] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:02.303 [2024-11-26 20:38:16.533425] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:02.303 [2024-11-26 20:38:16.533436] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:02.303 [2024-11-26 20:38:16.533454] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:02.303 20:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.303 20:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:15:02.303 20:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:02.303 20:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:02.303 20:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:02.303 20:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.303 20:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:02.303 20:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:02.303 20:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:02.303 [2024-11-26 20:38:16.539885] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xfea000 was disconnected and freed. delete nvme_qpair. 
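
The host side drives everything over a second RPC socket (/tmp/host.sock): it starts discovery against 10.0.0.3:8009 and then polls bdev_get_bdevs until the namespace shows up as nvme0n1. A compact sketch of that polling pattern; helper names and RPC flags mirror the trace, timeouts are the ones passed on discovery_remove_ifc.sh@69:

HOST_SOCK=/tmp/host.sock

# Attach to the discovery service; short timeouts so the later interface
# removal is detected quickly.
./scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach

get_bdev_list() {
    ./scripts/rpc.py -s "$HOST_SOCK" bdev_get_bdevs \
        | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Loop until the bdev list matches the expected value (e.g. "nvme0n1").
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

wait_for_bdev nvme0n1
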
00:15:02.303 20:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.303 20:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:15:02.303 20:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:15:02.303 20:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:15:02.303 20:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:15:02.303 20:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:02.303 20:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:02.303 20:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:02.303 20:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:02.303 20:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.303 20:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:02.303 20:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:02.303 20:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.303 20:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:02.303 20:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:03.242 20:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:03.242 20:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:03.242 20:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:03.242 20:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:03.242 20:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.242 20:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:03.242 20:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:03.242 20:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.242 20:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:03.242 20:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:04.224 20:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:04.224 20:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:04.224 20:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.224 20:38:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:04.224 20:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:04.224 20:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:04.224 20:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:04.224 20:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.224 20:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:04.224 20:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:05.154 20:38:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:05.155 20:38:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:05.155 20:38:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:05.155 20:38:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:05.155 20:38:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.155 20:38:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:05.155 20:38:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:05.411 20:38:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.411 20:38:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:05.411 20:38:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:06.341 20:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:06.341 20:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:06.341 20:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.341 20:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:06.341 20:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:06.341 20:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:06.341 20:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:06.341 20:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.341 20:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:06.341 20:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:07.273 20:38:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:07.273 20:38:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:07.273 20:38:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.273 20:38:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:07.273 20:38:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:07.273 20:38:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:07.273 20:38:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:07.273 20:38:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.273 20:38:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:07.273 20:38:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:07.530 [2024-11-26 20:38:21.971872] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:15:07.530 [2024-11-26 20:38:21.971936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.530 [2024-11-26 20:38:21.971944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.530 [2024-11-26 20:38:21.971953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.530 [2024-11-26 20:38:21.971958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.530 [2024-11-26 20:38:21.971963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.530 [2024-11-26 20:38:21.971968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.530 [2024-11-26 20:38:21.971974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.530 [2024-11-26 20:38:21.971979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.530 [2024-11-26 20:38:21.971984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.530 [2024-11-26 20:38:21.971989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.530 [2024-11-26 20:38:21.971994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc6250 is same with the state(6) to be set 00:15:07.530 [2024-11-26 20:38:21.981866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc6250 (9): Bad file descriptor 00:15:07.530 [2024-11-26 20:38:21.991883] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:15:07.530 [2024-11-26 20:38:21.991896] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:15:07.530 [2024-11-26 20:38:21.991899] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:15:07.530 [2024-11-26 20:38:21.991902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:15:07.530 [2024-11-26 20:38:21.991927] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:15:08.463 20:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:08.463 20:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:08.463 20:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:08.463 20:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.463 20:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:08.463 20:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:08.463 20:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:08.721 [2024-11-26 20:38:23.047674] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:15:08.721 [2024-11-26 20:38:23.047801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc6250 with addr=10.0.0.3, port=4420 00:15:08.721 [2024-11-26 20:38:23.047830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc6250 is same with the state(6) to be set 00:15:08.722 [2024-11-26 20:38:23.047893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc6250 (9): Bad file descriptor 00:15:08.722 [2024-11-26 20:38:23.049054] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:15:08.722 [2024-11-26 20:38:23.049131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:15:08.722 [2024-11-26 20:38:23.049150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:15:08.722 [2024-11-26 20:38:23.049169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:15:08.722 [2024-11-26 20:38:23.049186] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:15:08.722 [2024-11-26 20:38:23.049199] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:15:08.722 [2024-11-26 20:38:23.049209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:15:08.722 [2024-11-26 20:38:23.049228] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
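
This is the core of the test: the target-side address is removed and the interface taken down, the host's reconnect attempts time out, and the controller plus its nvme0n1 bdev are torn down. A sketch of the fault-injection half, assuming the topology above and the get_bdev_list helper from the earlier sketch:

# Remove the path to the target and wait for the host to drop the bdev.
ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

# With a 2 s ctrlr-loss timeout and a 1 s reconnect delay, the reconnect
# attempts above fail quickly and the bdev list drains to empty.
while [[ "$(get_bdev_list)" != '' ]]; do sleep 1; done
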
00:15:08.722 [2024-11-26 20:38:23.049238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:15:08.722 20:38:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.722 20:38:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:08.722 20:38:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:09.656 [2024-11-26 20:38:24.049318] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:15:09.656 [2024-11-26 20:38:24.049356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:15:09.656 [2024-11-26 20:38:24.049376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:15:09.656 [2024-11-26 20:38:24.049381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:15:09.656 [2024-11-26 20:38:24.049387] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:15:09.656 [2024-11-26 20:38:24.049391] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:15:09.656 [2024-11-26 20:38:24.049395] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:15:09.656 [2024-11-26 20:38:24.049399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:15:09.656 [2024-11-26 20:38:24.049422] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:15:09.656 [2024-11-26 20:38:24.049455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:09.656 [2024-11-26 20:38:24.049464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:09.656 [2024-11-26 20:38:24.049471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:09.656 [2024-11-26 20:38:24.049476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:09.656 [2024-11-26 20:38:24.049480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:09.656 [2024-11-26 20:38:24.049485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:09.656 [2024-11-26 20:38:24.049490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:09.656 [2024-11-26 20:38:24.049494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:09.656 [2024-11-26 20:38:24.049499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:09.656 [2024-11-26 20:38:24.049503] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:09.656 [2024-11-26 20:38:24.049508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:15:09.656 [2024-11-26 20:38:24.049882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf51a20 (9): Bad file descriptor 00:15:09.656 [2024-11-26 20:38:24.050890] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:15:09.656 [2024-11-26 20:38:24.050902] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:15:09.656 20:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:09.656 20:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:09.656 20:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:09.656 20:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.656 20:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:09.656 20:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:09.656 20:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:09.656 20:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.656 20:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:15:09.656 20:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:09.656 20:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:09.656 20:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:15:09.656 20:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:09.656 20:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:09.656 20:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:09.656 20:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.656 20:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:09.656 20:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:09.656 20:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:09.656 20:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.656 20:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:09.656 20:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:11.030 20:38:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:11.030 20:38:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:11.030 20:38:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.030 20:38:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:11.030 20:38:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:11.030 20:38:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:11.030 20:38:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:11.030 20:38:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.030 20:38:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:11.030 20:38:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:11.610 [2024-11-26 20:38:26.054378] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:11.610 [2024-11-26 20:38:26.054408] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:11.610 [2024-11-26 20:38:26.054418] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:11.610 [2024-11-26 20:38:26.060402] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:15:11.610 [2024-11-26 20:38:26.114631] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:15:11.610 [2024-11-26 20:38:26.115152] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xfd1d80:1 started. 00:15:11.610 [2024-11-26 20:38:26.116107] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:11.610 [2024-11-26 20:38:26.116138] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:11.610 [2024-11-26 20:38:26.116153] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:11.610 [2024-11-26 20:38:26.116164] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:15:11.610 [2024-11-26 20:38:26.116170] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:11.610 [2024-11-26 20:38:26.123149] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xfd1d80 was disconnected and freed. delete nvme_qpair. 
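
Restoring the address and link brings the discovery path back: the discovery poller re-attaches and the namespace reappears under a fresh controller as nvme1n1. Sketch of the recovery half, again assuming get_bdev_list from the earlier sketch:

# Bring the target path back; discovery re-attaches and exposes the namespace
# again, now as nvme1n1 under a new controller instance.
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

while [[ "$(get_bdev_list)" != 'nvme1n1' ]]; do sleep 1; done
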
00:15:11.868 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:11.868 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:11.868 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.868 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:11.868 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:11.868 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:11.868 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:11.868 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.868 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:15:11.868 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:15:11.868 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 76310 00:15:11.868 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 76310 ']' 00:15:11.868 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 76310 00:15:11.868 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:15:11.868 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:11.868 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76310 00:15:11.868 killing process with pid 76310 00:15:11.868 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:11.868 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:11.868 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76310' 00:15:11.868 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 76310 00:15:11.868 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 76310 00:15:11.868 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:15:11.868 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:11.868 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:12.125 rmmod nvme_tcp 00:15:12.125 rmmod nvme_fabrics 00:15:12.125 rmmod nvme_keyring 00:15:12.125 20:38:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 76278 ']' 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 76278 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 76278 ']' 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 76278 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76278 00:15:12.125 killing process with pid 76278 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76278' 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 76278 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 76278 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:12.125 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:12.384 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:12.384 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:12.384 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:12.384 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:12.384 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:12.384 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.384 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:12.384 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.384 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:15:12.384 00:15:12.384 real 0m13.799s 00:15:12.384 user 0m23.678s 00:15:12.384 sys 0m1.966s 00:15:12.384 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.384 20:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:12.384 ************************************ 00:15:12.384 END TEST nvmf_discovery_remove_ifc 00:15:12.384 ************************************ 00:15:12.384 20:38:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:15:12.384 20:38:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:12.384 20:38:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.384 20:38:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:12.384 ************************************ 00:15:12.384 START TEST nvmf_identify_kernel_target 00:15:12.384 ************************************ 00:15:12.384 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:15:12.384 * Looking for test storage... 
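The teardown traced just before this identify_kernel_target test begins relies on tagging: every iptables rule the setup path adds carries an SPDK_NVMF comment, so nvmftestfini can strip them all in one save/filter/restore pass without touching the rest of the firewall. A rough sketch of that pattern (run as root; the actual helpers are ipts and iptr in nvmf/common.sh):

#!/usr/bin/env bash
set -euo pipefail
TAG=SPDK_NVMF   # comment string attached to every rule the test adds

# Setup side: insert a rule and tag it with the command that created it.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment "$TAG:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT"

# Teardown side: dump the ruleset, drop every tagged line, reload what remains.
iptables-save | grep -v "$TAG" | iptables-restore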
00:15:12.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:12.384 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:12.384 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:15:12.384 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:12.642 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:12.642 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:12.642 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:12.642 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:12.642 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:12.642 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:12.642 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:12.642 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:12.642 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:12.642 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:12.642 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:12.642 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:12.642 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:15:12.642 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:15:12.642 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:12.642 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:12.642 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:15:12.642 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:15:12.642 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:12.642 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:15:12.642 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:12.642 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:15:12.642 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:15:12.642 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:12.642 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:15:12.642 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:12.643 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:12.643 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:12.643 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:15:12.643 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:12.643 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:12.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.643 --rc genhtml_branch_coverage=1 00:15:12.643 --rc genhtml_function_coverage=1 00:15:12.643 --rc genhtml_legend=1 00:15:12.643 --rc geninfo_all_blocks=1 00:15:12.643 --rc geninfo_unexecuted_blocks=1 00:15:12.643 00:15:12.643 ' 00:15:12.643 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:12.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.643 --rc genhtml_branch_coverage=1 00:15:12.643 --rc genhtml_function_coverage=1 00:15:12.643 --rc genhtml_legend=1 00:15:12.643 --rc geninfo_all_blocks=1 00:15:12.643 --rc geninfo_unexecuted_blocks=1 00:15:12.643 00:15:12.643 ' 00:15:12.643 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:12.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.643 --rc genhtml_branch_coverage=1 00:15:12.643 --rc genhtml_function_coverage=1 00:15:12.643 --rc genhtml_legend=1 00:15:12.643 --rc geninfo_all_blocks=1 00:15:12.643 --rc geninfo_unexecuted_blocks=1 00:15:12.643 00:15:12.643 ' 00:15:12.643 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:12.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.643 --rc genhtml_branch_coverage=1 00:15:12.643 --rc genhtml_function_coverage=1 00:15:12.643 --rc genhtml_legend=1 00:15:12.643 --rc geninfo_all_blocks=1 00:15:12.643 --rc geninfo_unexecuted_blocks=1 00:15:12.643 00:15:12.643 ' 00:15:12.643 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
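The xtrace block that just ran is scripts/common.sh deciding whether the installed lcov is older than 2.x (lt 1.15 2 via cmp_versions), which selects the coverage flags exported right afterwards. Below is a self-contained sketch of that dotted-version comparison; ver_lt is an illustrative name, the real helpers (lt, cmp_versions, decimal) live in spdk/scripts/common.sh.

#!/usr/bin/env bash

# True (exit 0) when dotted version $1 is strictly less than $2, e.g. ver_lt 1.15 2.
ver_lt() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        # Treat non-numeric components (e.g. "rc1") as 0, like the original helper.
        [[ $x =~ ^[0-9]+$ ]] || x=0
        [[ $y =~ ^[0-9]+$ ]] || y=0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # versions are equal, so not less-than
}

if ver_lt "$(lcov --version | awk '{print $NF}')" 2; then
    echo "lcov < 2: keep --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
fi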
00:15:12.643 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:15:12.643 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.643 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.643 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.643 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.643 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.643 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.643 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.643 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.643 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.643 20:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:12.643 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:15:12.643 20:38:27 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:12.643 20:38:27 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:12.643 Cannot find device "nvmf_init_br" 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:12.643 Cannot find device "nvmf_init_br2" 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:12.643 Cannot find device "nvmf_tgt_br" 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:12.643 Cannot find device "nvmf_tgt_br2" 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:12.643 Cannot find device "nvmf_init_br" 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:12.643 Cannot find device "nvmf_init_br2" 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:12.643 Cannot find device "nvmf_tgt_br" 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:12.643 Cannot find device "nvmf_tgt_br2" 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:12.643 Cannot find device "nvmf_br" 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:12.643 Cannot find device "nvmf_init_if" 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:12.643 Cannot find device "nvmf_init_if2" 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:12.643 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.643 20:38:27 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:12.643 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:12.643 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:12.643 20:38:27 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:12.901 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:12.901 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:12.901 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:12.901 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:12.901 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:12.901 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:12.901 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:12.901 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:12.901 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:12.901 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:12.901 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:12.901 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:15:12.901 00:15:12.901 --- 10.0.0.3 ping statistics --- 00:15:12.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.901 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:12.901 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:12.901 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:12.901 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:15:12.901 00:15:12.901 --- 10.0.0.4 ping statistics --- 00:15:12.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.901 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:15:12.901 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:12.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:12.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:15:12.901 00:15:12.901 --- 10.0.0.1 ping statistics --- 00:15:12.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.901 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:15:12.901 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:12.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:12.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.036 ms 00:15:12.901 00:15:12.901 --- 10.0.0.2 ping statistics --- 00:15:12.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.901 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:15:12.901 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.901 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:15:12.901 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:12.901 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.901 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:12.901 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:15:12.902 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:13.160 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:13.160 Waiting for block devices as requested 00:15:13.160 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:13.160 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:15:13.418 No valid GPT data, bailing 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:13.418 20:38:27 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:15:13.418 No valid GPT data, bailing 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:15:13.418 No valid GPT data, bailing 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:15:13.418 No valid GPT data, bailing 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:15:13.418 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:15:13.676 20:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid=38d6bd30-54c5-4858-a242-ab15764fb2d9 -a 10.0.0.1 -t tcp -s 4420 00:15:13.676 00:15:13.676 Discovery Log Number of Records 2, Generation counter 2 00:15:13.676 =====Discovery Log Entry 0====== 00:15:13.676 trtype: tcp 00:15:13.676 adrfam: ipv4 00:15:13.676 subtype: current discovery subsystem 00:15:13.676 treq: not specified, sq flow control disable supported 00:15:13.676 portid: 1 00:15:13.676 trsvcid: 4420 00:15:13.676 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:13.676 traddr: 10.0.0.1 00:15:13.676 eflags: none 00:15:13.676 sectype: none 00:15:13.676 =====Discovery Log Entry 1====== 00:15:13.676 trtype: tcp 00:15:13.676 adrfam: ipv4 00:15:13.676 subtype: nvme subsystem 00:15:13.676 treq: not 
specified, sq flow control disable supported 00:15:13.676 portid: 1 00:15:13.676 trsvcid: 4420 00:15:13.676 subnqn: nqn.2016-06.io.spdk:testnqn 00:15:13.676 traddr: 10.0.0.1 00:15:13.676 eflags: none 00:15:13.676 sectype: none 00:15:13.676 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:15:13.676 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:15:13.676 ===================================================== 00:15:13.676 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:13.676 ===================================================== 00:15:13.676 Controller Capabilities/Features 00:15:13.676 ================================ 00:15:13.676 Vendor ID: 0000 00:15:13.676 Subsystem Vendor ID: 0000 00:15:13.676 Serial Number: 52fed8ed5259f6069f16 00:15:13.676 Model Number: Linux 00:15:13.676 Firmware Version: 6.8.9-20 00:15:13.676 Recommended Arb Burst: 0 00:15:13.676 IEEE OUI Identifier: 00 00 00 00:15:13.676 Multi-path I/O 00:15:13.676 May have multiple subsystem ports: No 00:15:13.676 May have multiple controllers: No 00:15:13.676 Associated with SR-IOV VF: No 00:15:13.676 Max Data Transfer Size: Unlimited 00:15:13.676 Max Number of Namespaces: 0 00:15:13.676 Max Number of I/O Queues: 1024 00:15:13.676 NVMe Specification Version (VS): 1.3 00:15:13.676 NVMe Specification Version (Identify): 1.3 00:15:13.676 Maximum Queue Entries: 1024 00:15:13.676 Contiguous Queues Required: No 00:15:13.676 Arbitration Mechanisms Supported 00:15:13.676 Weighted Round Robin: Not Supported 00:15:13.676 Vendor Specific: Not Supported 00:15:13.676 Reset Timeout: 7500 ms 00:15:13.676 Doorbell Stride: 4 bytes 00:15:13.676 NVM Subsystem Reset: Not Supported 00:15:13.676 Command Sets Supported 00:15:13.676 NVM Command Set: Supported 00:15:13.676 Boot Partition: Not Supported 00:15:13.676 Memory Page Size Minimum: 4096 bytes 00:15:13.676 Memory Page Size Maximum: 4096 bytes 00:15:13.676 Persistent Memory Region: Not Supported 00:15:13.676 Optional Asynchronous Events Supported 00:15:13.676 Namespace Attribute Notices: Not Supported 00:15:13.676 Firmware Activation Notices: Not Supported 00:15:13.676 ANA Change Notices: Not Supported 00:15:13.676 PLE Aggregate Log Change Notices: Not Supported 00:15:13.676 LBA Status Info Alert Notices: Not Supported 00:15:13.676 EGE Aggregate Log Change Notices: Not Supported 00:15:13.676 Normal NVM Subsystem Shutdown event: Not Supported 00:15:13.676 Zone Descriptor Change Notices: Not Supported 00:15:13.676 Discovery Log Change Notices: Supported 00:15:13.676 Controller Attributes 00:15:13.676 128-bit Host Identifier: Not Supported 00:15:13.676 Non-Operational Permissive Mode: Not Supported 00:15:13.676 NVM Sets: Not Supported 00:15:13.676 Read Recovery Levels: Not Supported 00:15:13.676 Endurance Groups: Not Supported 00:15:13.676 Predictable Latency Mode: Not Supported 00:15:13.676 Traffic Based Keep ALive: Not Supported 00:15:13.676 Namespace Granularity: Not Supported 00:15:13.676 SQ Associations: Not Supported 00:15:13.676 UUID List: Not Supported 00:15:13.676 Multi-Domain Subsystem: Not Supported 00:15:13.676 Fixed Capacity Management: Not Supported 00:15:13.676 Variable Capacity Management: Not Supported 00:15:13.676 Delete Endurance Group: Not Supported 00:15:13.676 Delete NVM Set: Not Supported 00:15:13.676 Extended LBA Formats Supported: Not Supported 00:15:13.676 Flexible Data 
Placement Supported: Not Supported 00:15:13.676 00:15:13.676 Controller Memory Buffer Support 00:15:13.676 ================================ 00:15:13.676 Supported: No 00:15:13.676 00:15:13.676 Persistent Memory Region Support 00:15:13.676 ================================ 00:15:13.676 Supported: No 00:15:13.676 00:15:13.676 Admin Command Set Attributes 00:15:13.676 ============================ 00:15:13.676 Security Send/Receive: Not Supported 00:15:13.676 Format NVM: Not Supported 00:15:13.676 Firmware Activate/Download: Not Supported 00:15:13.676 Namespace Management: Not Supported 00:15:13.676 Device Self-Test: Not Supported 00:15:13.676 Directives: Not Supported 00:15:13.676 NVMe-MI: Not Supported 00:15:13.676 Virtualization Management: Not Supported 00:15:13.676 Doorbell Buffer Config: Not Supported 00:15:13.676 Get LBA Status Capability: Not Supported 00:15:13.676 Command & Feature Lockdown Capability: Not Supported 00:15:13.676 Abort Command Limit: 1 00:15:13.676 Async Event Request Limit: 1 00:15:13.677 Number of Firmware Slots: N/A 00:15:13.677 Firmware Slot 1 Read-Only: N/A 00:15:13.677 Firmware Activation Without Reset: N/A 00:15:13.677 Multiple Update Detection Support: N/A 00:15:13.677 Firmware Update Granularity: No Information Provided 00:15:13.677 Per-Namespace SMART Log: No 00:15:13.677 Asymmetric Namespace Access Log Page: Not Supported 00:15:13.677 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:13.677 Command Effects Log Page: Not Supported 00:15:13.677 Get Log Page Extended Data: Supported 00:15:13.677 Telemetry Log Pages: Not Supported 00:15:13.677 Persistent Event Log Pages: Not Supported 00:15:13.677 Supported Log Pages Log Page: May Support 00:15:13.677 Commands Supported & Effects Log Page: Not Supported 00:15:13.677 Feature Identifiers & Effects Log Page:May Support 00:15:13.677 NVMe-MI Commands & Effects Log Page: May Support 00:15:13.677 Data Area 4 for Telemetry Log: Not Supported 00:15:13.677 Error Log Page Entries Supported: 1 00:15:13.677 Keep Alive: Not Supported 00:15:13.677 00:15:13.677 NVM Command Set Attributes 00:15:13.677 ========================== 00:15:13.677 Submission Queue Entry Size 00:15:13.677 Max: 1 00:15:13.677 Min: 1 00:15:13.677 Completion Queue Entry Size 00:15:13.677 Max: 1 00:15:13.677 Min: 1 00:15:13.677 Number of Namespaces: 0 00:15:13.677 Compare Command: Not Supported 00:15:13.677 Write Uncorrectable Command: Not Supported 00:15:13.677 Dataset Management Command: Not Supported 00:15:13.677 Write Zeroes Command: Not Supported 00:15:13.677 Set Features Save Field: Not Supported 00:15:13.677 Reservations: Not Supported 00:15:13.677 Timestamp: Not Supported 00:15:13.677 Copy: Not Supported 00:15:13.677 Volatile Write Cache: Not Present 00:15:13.677 Atomic Write Unit (Normal): 1 00:15:13.677 Atomic Write Unit (PFail): 1 00:15:13.677 Atomic Compare & Write Unit: 1 00:15:13.677 Fused Compare & Write: Not Supported 00:15:13.677 Scatter-Gather List 00:15:13.677 SGL Command Set: Supported 00:15:13.677 SGL Keyed: Not Supported 00:15:13.677 SGL Bit Bucket Descriptor: Not Supported 00:15:13.677 SGL Metadata Pointer: Not Supported 00:15:13.677 Oversized SGL: Not Supported 00:15:13.677 SGL Metadata Address: Not Supported 00:15:13.677 SGL Offset: Supported 00:15:13.677 Transport SGL Data Block: Not Supported 00:15:13.677 Replay Protected Memory Block: Not Supported 00:15:13.677 00:15:13.677 Firmware Slot Information 00:15:13.677 ========================= 00:15:13.677 Active slot: 0 00:15:13.677 00:15:13.677 00:15:13.677 Error Log 
00:15:13.677 ========= 00:15:13.677 00:15:13.677 Active Namespaces 00:15:13.677 ================= 00:15:13.677 Discovery Log Page 00:15:13.677 ================== 00:15:13.677 Generation Counter: 2 00:15:13.677 Number of Records: 2 00:15:13.677 Record Format: 0 00:15:13.677 00:15:13.677 Discovery Log Entry 0 00:15:13.677 ---------------------- 00:15:13.677 Transport Type: 3 (TCP) 00:15:13.677 Address Family: 1 (IPv4) 00:15:13.677 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:13.677 Entry Flags: 00:15:13.677 Duplicate Returned Information: 0 00:15:13.677 Explicit Persistent Connection Support for Discovery: 0 00:15:13.677 Transport Requirements: 00:15:13.677 Secure Channel: Not Specified 00:15:13.677 Port ID: 1 (0x0001) 00:15:13.677 Controller ID: 65535 (0xffff) 00:15:13.677 Admin Max SQ Size: 32 00:15:13.677 Transport Service Identifier: 4420 00:15:13.677 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:13.677 Transport Address: 10.0.0.1 00:15:13.677 Discovery Log Entry 1 00:15:13.677 ---------------------- 00:15:13.677 Transport Type: 3 (TCP) 00:15:13.677 Address Family: 1 (IPv4) 00:15:13.677 Subsystem Type: 2 (NVM Subsystem) 00:15:13.677 Entry Flags: 00:15:13.677 Duplicate Returned Information: 0 00:15:13.677 Explicit Persistent Connection Support for Discovery: 0 00:15:13.677 Transport Requirements: 00:15:13.677 Secure Channel: Not Specified 00:15:13.677 Port ID: 1 (0x0001) 00:15:13.677 Controller ID: 65535 (0xffff) 00:15:13.677 Admin Max SQ Size: 32 00:15:13.677 Transport Service Identifier: 4420 00:15:13.677 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:15:13.677 Transport Address: 10.0.0.1 00:15:13.677 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:15:13.935 get_feature(0x01) failed 00:15:13.935 get_feature(0x02) failed 00:15:13.935 get_feature(0x04) failed 00:15:13.935 ===================================================== 00:15:13.935 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:15:13.935 ===================================================== 00:15:13.935 Controller Capabilities/Features 00:15:13.935 ================================ 00:15:13.935 Vendor ID: 0000 00:15:13.935 Subsystem Vendor ID: 0000 00:15:13.935 Serial Number: 7cbabec74ee3b00bdd9a 00:15:13.935 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:15:13.935 Firmware Version: 6.8.9-20 00:15:13.935 Recommended Arb Burst: 6 00:15:13.935 IEEE OUI Identifier: 00 00 00 00:15:13.935 Multi-path I/O 00:15:13.935 May have multiple subsystem ports: Yes 00:15:13.935 May have multiple controllers: Yes 00:15:13.935 Associated with SR-IOV VF: No 00:15:13.935 Max Data Transfer Size: Unlimited 00:15:13.935 Max Number of Namespaces: 1024 00:15:13.935 Max Number of I/O Queues: 128 00:15:13.935 NVMe Specification Version (VS): 1.3 00:15:13.935 NVMe Specification Version (Identify): 1.3 00:15:13.935 Maximum Queue Entries: 1024 00:15:13.935 Contiguous Queues Required: No 00:15:13.935 Arbitration Mechanisms Supported 00:15:13.935 Weighted Round Robin: Not Supported 00:15:13.935 Vendor Specific: Not Supported 00:15:13.935 Reset Timeout: 7500 ms 00:15:13.935 Doorbell Stride: 4 bytes 00:15:13.935 NVM Subsystem Reset: Not Supported 00:15:13.935 Command Sets Supported 00:15:13.935 NVM Command Set: Supported 00:15:13.935 Boot Partition: Not Supported 00:15:13.935 Memory 
Page Size Minimum: 4096 bytes 00:15:13.935 Memory Page Size Maximum: 4096 bytes 00:15:13.935 Persistent Memory Region: Not Supported 00:15:13.935 Optional Asynchronous Events Supported 00:15:13.935 Namespace Attribute Notices: Supported 00:15:13.935 Firmware Activation Notices: Not Supported 00:15:13.935 ANA Change Notices: Supported 00:15:13.935 PLE Aggregate Log Change Notices: Not Supported 00:15:13.935 LBA Status Info Alert Notices: Not Supported 00:15:13.935 EGE Aggregate Log Change Notices: Not Supported 00:15:13.935 Normal NVM Subsystem Shutdown event: Not Supported 00:15:13.935 Zone Descriptor Change Notices: Not Supported 00:15:13.935 Discovery Log Change Notices: Not Supported 00:15:13.935 Controller Attributes 00:15:13.935 128-bit Host Identifier: Supported 00:15:13.935 Non-Operational Permissive Mode: Not Supported 00:15:13.935 NVM Sets: Not Supported 00:15:13.935 Read Recovery Levels: Not Supported 00:15:13.935 Endurance Groups: Not Supported 00:15:13.935 Predictable Latency Mode: Not Supported 00:15:13.935 Traffic Based Keep ALive: Supported 00:15:13.935 Namespace Granularity: Not Supported 00:15:13.935 SQ Associations: Not Supported 00:15:13.935 UUID List: Not Supported 00:15:13.935 Multi-Domain Subsystem: Not Supported 00:15:13.935 Fixed Capacity Management: Not Supported 00:15:13.935 Variable Capacity Management: Not Supported 00:15:13.935 Delete Endurance Group: Not Supported 00:15:13.935 Delete NVM Set: Not Supported 00:15:13.935 Extended LBA Formats Supported: Not Supported 00:15:13.936 Flexible Data Placement Supported: Not Supported 00:15:13.936 00:15:13.936 Controller Memory Buffer Support 00:15:13.936 ================================ 00:15:13.936 Supported: No 00:15:13.936 00:15:13.936 Persistent Memory Region Support 00:15:13.936 ================================ 00:15:13.936 Supported: No 00:15:13.936 00:15:13.936 Admin Command Set Attributes 00:15:13.936 ============================ 00:15:13.936 Security Send/Receive: Not Supported 00:15:13.936 Format NVM: Not Supported 00:15:13.936 Firmware Activate/Download: Not Supported 00:15:13.936 Namespace Management: Not Supported 00:15:13.936 Device Self-Test: Not Supported 00:15:13.936 Directives: Not Supported 00:15:13.936 NVMe-MI: Not Supported 00:15:13.936 Virtualization Management: Not Supported 00:15:13.936 Doorbell Buffer Config: Not Supported 00:15:13.936 Get LBA Status Capability: Not Supported 00:15:13.936 Command & Feature Lockdown Capability: Not Supported 00:15:13.936 Abort Command Limit: 4 00:15:13.936 Async Event Request Limit: 4 00:15:13.936 Number of Firmware Slots: N/A 00:15:13.936 Firmware Slot 1 Read-Only: N/A 00:15:13.936 Firmware Activation Without Reset: N/A 00:15:13.936 Multiple Update Detection Support: N/A 00:15:13.936 Firmware Update Granularity: No Information Provided 00:15:13.936 Per-Namespace SMART Log: Yes 00:15:13.936 Asymmetric Namespace Access Log Page: Supported 00:15:13.936 ANA Transition Time : 10 sec 00:15:13.936 00:15:13.936 Asymmetric Namespace Access Capabilities 00:15:13.936 ANA Optimized State : Supported 00:15:13.936 ANA Non-Optimized State : Supported 00:15:13.936 ANA Inaccessible State : Supported 00:15:13.936 ANA Persistent Loss State : Supported 00:15:13.936 ANA Change State : Supported 00:15:13.936 ANAGRPID is not changed : No 00:15:13.936 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:15:13.936 00:15:13.936 ANA Group Identifier Maximum : 128 00:15:13.936 Number of ANA Group Identifiers : 128 00:15:13.936 Max Number of Allowed Namespaces : 1024 00:15:13.936 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:15:13.936 Command Effects Log Page: Supported 00:15:13.936 Get Log Page Extended Data: Supported 00:15:13.936 Telemetry Log Pages: Not Supported 00:15:13.936 Persistent Event Log Pages: Not Supported 00:15:13.936 Supported Log Pages Log Page: May Support 00:15:13.936 Commands Supported & Effects Log Page: Not Supported 00:15:13.936 Feature Identifiers & Effects Log Page:May Support 00:15:13.936 NVMe-MI Commands & Effects Log Page: May Support 00:15:13.936 Data Area 4 for Telemetry Log: Not Supported 00:15:13.936 Error Log Page Entries Supported: 128 00:15:13.936 Keep Alive: Supported 00:15:13.936 Keep Alive Granularity: 1000 ms 00:15:13.936 00:15:13.936 NVM Command Set Attributes 00:15:13.936 ========================== 00:15:13.936 Submission Queue Entry Size 00:15:13.936 Max: 64 00:15:13.936 Min: 64 00:15:13.936 Completion Queue Entry Size 00:15:13.936 Max: 16 00:15:13.936 Min: 16 00:15:13.936 Number of Namespaces: 1024 00:15:13.936 Compare Command: Not Supported 00:15:13.936 Write Uncorrectable Command: Not Supported 00:15:13.936 Dataset Management Command: Supported 00:15:13.936 Write Zeroes Command: Supported 00:15:13.936 Set Features Save Field: Not Supported 00:15:13.936 Reservations: Not Supported 00:15:13.936 Timestamp: Not Supported 00:15:13.936 Copy: Not Supported 00:15:13.936 Volatile Write Cache: Present 00:15:13.936 Atomic Write Unit (Normal): 1 00:15:13.936 Atomic Write Unit (PFail): 1 00:15:13.936 Atomic Compare & Write Unit: 1 00:15:13.936 Fused Compare & Write: Not Supported 00:15:13.936 Scatter-Gather List 00:15:13.936 SGL Command Set: Supported 00:15:13.936 SGL Keyed: Not Supported 00:15:13.936 SGL Bit Bucket Descriptor: Not Supported 00:15:13.936 SGL Metadata Pointer: Not Supported 00:15:13.936 Oversized SGL: Not Supported 00:15:13.936 SGL Metadata Address: Not Supported 00:15:13.936 SGL Offset: Supported 00:15:13.936 Transport SGL Data Block: Not Supported 00:15:13.936 Replay Protected Memory Block: Not Supported 00:15:13.936 00:15:13.936 Firmware Slot Information 00:15:13.936 ========================= 00:15:13.936 Active slot: 0 00:15:13.936 00:15:13.936 Asymmetric Namespace Access 00:15:13.936 =========================== 00:15:13.936 Change Count : 0 00:15:13.936 Number of ANA Group Descriptors : 1 00:15:13.936 ANA Group Descriptor : 0 00:15:13.936 ANA Group ID : 1 00:15:13.936 Number of NSID Values : 1 00:15:13.936 Change Count : 0 00:15:13.936 ANA State : 1 00:15:13.936 Namespace Identifier : 1 00:15:13.936 00:15:13.936 Commands Supported and Effects 00:15:13.936 ============================== 00:15:13.936 Admin Commands 00:15:13.936 -------------- 00:15:13.936 Get Log Page (02h): Supported 00:15:13.936 Identify (06h): Supported 00:15:13.936 Abort (08h): Supported 00:15:13.936 Set Features (09h): Supported 00:15:13.936 Get Features (0Ah): Supported 00:15:13.936 Asynchronous Event Request (0Ch): Supported 00:15:13.936 Keep Alive (18h): Supported 00:15:13.936 I/O Commands 00:15:13.936 ------------ 00:15:13.936 Flush (00h): Supported 00:15:13.936 Write (01h): Supported LBA-Change 00:15:13.936 Read (02h): Supported 00:15:13.936 Write Zeroes (08h): Supported LBA-Change 00:15:13.936 Dataset Management (09h): Supported 00:15:13.936 00:15:13.936 Error Log 00:15:13.936 ========= 00:15:13.936 Entry: 0 00:15:13.936 Error Count: 0x3 00:15:13.936 Submission Queue Id: 0x0 00:15:13.936 Command Id: 0x5 00:15:13.936 Phase Bit: 0 00:15:13.936 Status Code: 0x2 00:15:13.936 Status Code Type: 0x0 00:15:13.936 Do Not Retry: 1 00:15:13.936 Error 
Location: 0x28 00:15:13.936 LBA: 0x0 00:15:13.936 Namespace: 0x0 00:15:13.936 Vendor Log Page: 0x0 00:15:13.936 ----------- 00:15:13.936 Entry: 1 00:15:13.936 Error Count: 0x2 00:15:13.936 Submission Queue Id: 0x0 00:15:13.936 Command Id: 0x5 00:15:13.936 Phase Bit: 0 00:15:13.936 Status Code: 0x2 00:15:13.936 Status Code Type: 0x0 00:15:13.936 Do Not Retry: 1 00:15:13.936 Error Location: 0x28 00:15:13.936 LBA: 0x0 00:15:13.936 Namespace: 0x0 00:15:13.936 Vendor Log Page: 0x0 00:15:13.936 ----------- 00:15:13.936 Entry: 2 00:15:13.936 Error Count: 0x1 00:15:13.936 Submission Queue Id: 0x0 00:15:13.936 Command Id: 0x4 00:15:13.936 Phase Bit: 0 00:15:13.936 Status Code: 0x2 00:15:13.936 Status Code Type: 0x0 00:15:13.936 Do Not Retry: 1 00:15:13.936 Error Location: 0x28 00:15:13.936 LBA: 0x0 00:15:13.936 Namespace: 0x0 00:15:13.936 Vendor Log Page: 0x0 00:15:13.936 00:15:13.936 Number of Queues 00:15:13.936 ================ 00:15:13.936 Number of I/O Submission Queues: 128 00:15:13.936 Number of I/O Completion Queues: 128 00:15:13.936 00:15:13.936 ZNS Specific Controller Data 00:15:13.936 ============================ 00:15:13.936 Zone Append Size Limit: 0 00:15:13.936 00:15:13.936 00:15:13.936 Active Namespaces 00:15:13.936 ================= 00:15:13.936 get_feature(0x05) failed 00:15:13.936 Namespace ID:1 00:15:13.936 Command Set Identifier: NVM (00h) 00:15:13.936 Deallocate: Supported 00:15:13.936 Deallocated/Unwritten Error: Not Supported 00:15:13.936 Deallocated Read Value: Unknown 00:15:13.936 Deallocate in Write Zeroes: Not Supported 00:15:13.936 Deallocated Guard Field: 0xFFFF 00:15:13.936 Flush: Supported 00:15:13.936 Reservation: Not Supported 00:15:13.936 Namespace Sharing Capabilities: Multiple Controllers 00:15:13.936 Size (in LBAs): 1310720 (5GiB) 00:15:13.936 Capacity (in LBAs): 1310720 (5GiB) 00:15:13.936 Utilization (in LBAs): 1310720 (5GiB) 00:15:13.936 UUID: f97b73f7-3997-45f9-9bca-eea076a94785 00:15:13.936 Thin Provisioning: Not Supported 00:15:13.936 Per-NS Atomic Units: Yes 00:15:13.936 Atomic Boundary Size (Normal): 0 00:15:13.936 Atomic Boundary Size (PFail): 0 00:15:13.936 Atomic Boundary Offset: 0 00:15:13.936 NGUID/EUI64 Never Reused: No 00:15:13.936 ANA group ID: 1 00:15:13.936 Namespace Write Protected: No 00:15:13.936 Number of LBA Formats: 1 00:15:13.936 Current LBA Format: LBA Format #00 00:15:13.936 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:15:13.936 00:15:13.936 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:15:13.936 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:13.936 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:15:13.937 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:13.937 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:15:13.937 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:13.937 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:13.937 rmmod nvme_tcp 00:15:13.937 rmmod nvme_fabrics 00:15:13.937 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:13.937 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:15:13.937 20:38:28 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:15:13.937 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:15:13.937 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:13.937 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:13.937 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:13.937 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:15:13.937 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:15:13.937 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:15:13.937 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:13.937 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:13.937 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:13.937 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:13.937 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:13.937 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:14.194 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:14.194 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:14.194 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:14.194 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:14.194 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:14.194 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:14.194 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:14.194 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:14.194 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:14.194 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:14.194 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:14.194 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.195 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:14.195 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.195 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:15:14.195 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:15:14.195 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:15:14.195 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:15:14.195 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:14.195 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:14.195 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:15:14.195 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:14.195 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:15:14.195 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:15:14.195 20:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:14.758 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:15.016 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:15.016 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:15.016 00:15:15.016 real 0m2.567s 00:15:15.016 user 0m0.865s 00:15:15.016 sys 0m1.115s 00:15:15.016 20:38:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.016 20:38:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.016 ************************************ 00:15:15.016 END TEST nvmf_identify_kernel_target 00:15:15.016 ************************************ 00:15:15.016 20:38:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:15:15.016 20:38:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:15.016 20:38:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:15.016 20:38:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:15.016 ************************************ 00:15:15.016 START TEST nvmf_auth_host 00:15:15.016 ************************************ 00:15:15.016 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:15:15.016 * Looking for test storage... 
00:15:15.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:15.016 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:15.016 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:15.016 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:15:15.274 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:15.274 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:15.274 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:15.274 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:15.274 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:15.274 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:15.274 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:15.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.275 --rc genhtml_branch_coverage=1 00:15:15.275 --rc genhtml_function_coverage=1 00:15:15.275 --rc genhtml_legend=1 00:15:15.275 --rc geninfo_all_blocks=1 00:15:15.275 --rc geninfo_unexecuted_blocks=1 00:15:15.275 00:15:15.275 ' 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:15.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.275 --rc genhtml_branch_coverage=1 00:15:15.275 --rc genhtml_function_coverage=1 00:15:15.275 --rc genhtml_legend=1 00:15:15.275 --rc geninfo_all_blocks=1 00:15:15.275 --rc geninfo_unexecuted_blocks=1 00:15:15.275 00:15:15.275 ' 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:15.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.275 --rc genhtml_branch_coverage=1 00:15:15.275 --rc genhtml_function_coverage=1 00:15:15.275 --rc genhtml_legend=1 00:15:15.275 --rc geninfo_all_blocks=1 00:15:15.275 --rc geninfo_unexecuted_blocks=1 00:15:15.275 00:15:15.275 ' 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:15.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.275 --rc genhtml_branch_coverage=1 00:15:15.275 --rc genhtml_function_coverage=1 00:15:15.275 --rc genhtml_legend=1 00:15:15.275 --rc geninfo_all_blocks=1 00:15:15.275 --rc geninfo_unexecuted_blocks=1 00:15:15.275 00:15:15.275 ' 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.275 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:15.276 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:15.276 Cannot find device "nvmf_init_br" 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:15.276 Cannot find device "nvmf_init_br2" 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:15.276 Cannot find device "nvmf_tgt_br" 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:15.276 Cannot find device "nvmf_tgt_br2" 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:15.276 Cannot find device "nvmf_init_br" 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:15.276 Cannot find device "nvmf_init_br2" 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:15.276 Cannot find device "nvmf_tgt_br" 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:15.276 Cannot find device "nvmf_tgt_br2" 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:15.276 Cannot find device "nvmf_br" 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:15.276 Cannot find device "nvmf_init_if" 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:15.276 Cannot find device "nvmf_init_if2" 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:15:15.276 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:15.276 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:15.276 20:38:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:15:15.277 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:15.277 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:15.277 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:15:15.277 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:15.277 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:15.277 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:15.277 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:15.277 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:15.277 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:15.277 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:15.277 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:15.277 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:15.277 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:15.277 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:15.277 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:15.277 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:15.277 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:15.277 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:15.277 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:15.277 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:15.277 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:15.277 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:15.277 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
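Note on the nvmf_veth_init trace above: the commands amount to a two-initiator / two-target veth topology joined by a single bridge, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace. The condensed sketch below recaps that topology; interface names and the 10.0.0.1-4/24 addresses are copied from the trace, and the real logic lives in nvmf/common.sh.

# Condensed sketch of the topology nvmf_veth_init builds (names/addresses as traced above).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator 1: 10.0.0.1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator 2: 10.0.0.2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target 1:    10.0.0.3
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target 2:    10.0.0.4
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target ends live inside the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br                         # all bridge-side ends join nvmf_br
done

The ping checks that follow (10.0.0.3/4 from the host, 10.0.0.1/2 from inside the namespace) confirm both directions across the bridge before the target is started.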
00:15:15.535 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:15.535 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:15:15.535 00:15:15.535 --- 10.0.0.3 ping statistics --- 00:15:15.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.535 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:15.535 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:15.535 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.030 ms 00:15:15.535 00:15:15.535 --- 10.0.0.4 ping statistics --- 00:15:15.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.535 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:15.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:15.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:15:15.535 00:15:15.535 --- 10.0.0.1 ping statistics --- 00:15:15.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.535 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:15:15.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:15.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:15:15.535 00:15:15.535 --- 10.0.0.2 ping statistics --- 00:15:15.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.535 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=77303 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 77303 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 77303 ']' 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
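Note on the ipts/iptr helpers traced above (and in the earlier nvmftestfini block): every rule the test inserts carries an 'SPDK_NVMF:' comment so teardown can strip exactly those rules with an iptables-save / grep -v / iptables-restore round trip. A minimal sketch of that convention follows; the real helpers live in nvmf/common.sh, and only the tagging behaviour visible in the trace is reconstructed here.

# Minimal sketch of the SPDK_NVMF rule-tagging convention seen in the trace.
ipts() {
    # insert a rule and record the exact arguments in an iptables comment
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
iptr() {
    # drop every rule carrying the SPDK_NVMF tag (what nvmftestfini's iptr call does)
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}

ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from initiator 1
ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from initiator 2
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                  # allow traffic across the bridge

With the listener port open, nvmfappstart launches nvmf_tgt inside the namespace (the -L nvme_auth flag turns on the target's auth-related debug logging that this test relies on) and waitforlisten polls the /var/tmp/spdk.sock RPC socket until the target is ready.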
00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.535 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8321ff51485c624a5ac978fc44ecda09 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.DPZ 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8321ff51485c624a5ac978fc44ecda09 0 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8321ff51485c624a5ac978fc44ecda09 0 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8321ff51485c624a5ac978fc44ecda09 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.DPZ 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.DPZ 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.DPZ 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:16.536 20:38:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:16.536 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=80b33b688c33cfacd22f3579aef296bbc5c0ed1506f5aa767408a0b116836113 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.hqZ 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 80b33b688c33cfacd22f3579aef296bbc5c0ed1506f5aa767408a0b116836113 3 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 80b33b688c33cfacd22f3579aef296bbc5c0ed1506f5aa767408a0b116836113 3 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=80b33b688c33cfacd22f3579aef296bbc5c0ed1506f5aa767408a0b116836113 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.hqZ 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.hqZ 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.hqZ 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=70a08b1447a9d7dd950a4a2d50532903b67abb36752b5c85 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.LyD 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 70a08b1447a9d7dd950a4a2d50532903b67abb36752b5c85 0 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 70a08b1447a9d7dd950a4a2d50532903b67abb36752b5c85 0 
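Note on the gen_dhchap_key calls above and below: each call draws len/2 random bytes as hex with xxd, drops them into a mktemp'd /tmp/spdk.key-<digest>.XXX file, and wraps them into a DHHC-1 secret via an inline python step whose body the trace does not echo. The sketch below is a hedged reconstruction, not SPDK's exact helper: it assumes the generated hex text is used verbatim as the secret and follows the standard NVMe DH-HMAC-CHAP representation (DHHC-1:<hmac-id>:<base64 of secret plus little-endian CRC-32>:), with hmac ids 00/01/02/03 matching the null/sha256/sha384/sha512 map in the trace.

# Hedged sketch of gen_dhchap_key as traced (real helper: nvmf/common.sh); the python
# body is an assumption reproducing the spec's DHHC-1 wrapping, not SPDK's exact code.
gen_dhchap_key_sketch() {
    local digest=$1 len=$2 key file
    local -A ids=([null]=00 [sha256]=01 [sha384]=02 [sha512]=03)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # e.g. 8321ff51... for "null 32"
    file=$(mktemp -t "spdk.key-$digest.XXX")           # e.g. /tmp/spdk.key-null.DPZ
    python3 -c 'import base64, sys, zlib
secret = sys.argv[1].encode()                          # hex text used as the secret (assumption)
crc = zlib.crc32(secret).to_bytes(4, "little")         # CRC-32 appended little-endian
print("DHHC-1:%s:%s:" % (sys.argv[2], base64.b64encode(secret + crc).decode()), end="")' \
        "$key" "${ids[$digest]}" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

# host/auth.sh fills keys[i]/ckeys[i] from calls of this shape, e.g.:
#   gen_dhchap_key_sketch null 32     ->  /tmp/spdk.key-null.XXX   (DHHC-1:00:...)
#   gen_dhchap_key_sketch sha512 64   ->  /tmp/spdk.key-sha512.XXX (DHHC-1:03:...)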
00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=70a08b1447a9d7dd950a4a2d50532903b67abb36752b5c85 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:15:16.537 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.LyD 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.LyD 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.LyD 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9ca491343cc6ad2417dbadcf97fae31f2709712c4425ee7b 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.kQK 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9ca491343cc6ad2417dbadcf97fae31f2709712c4425ee7b 2 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9ca491343cc6ad2417dbadcf97fae31f2709712c4425ee7b 2 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9ca491343cc6ad2417dbadcf97fae31f2709712c4425ee7b 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.kQK 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.kQK 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.kQK 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:16.537 20:38:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8ee0243369c013d532cbc507e9b994c3 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.k3O 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8ee0243369c013d532cbc507e9b994c3 1 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8ee0243369c013d532cbc507e9b994c3 1 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8ee0243369c013d532cbc507e9b994c3 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:15:16.537 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.k3O 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.k3O 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.k3O 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=22a7043d7df11b3a71a5c3fdd6f2f446 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.q1t 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 22a7043d7df11b3a71a5c3fdd6f2f446 1 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 22a7043d7df11b3a71a5c3fdd6f2f446 1 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=22a7043d7df11b3a71a5c3fdd6f2f446 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.q1t 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.q1t 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.q1t 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:16.795 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ac54fa11e28ed9b93bbf92af2b8f4d61717c7a74ca5584ab 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.A8Y 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ac54fa11e28ed9b93bbf92af2b8f4d61717c7a74ca5584ab 2 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ac54fa11e28ed9b93bbf92af2b8f4d61717c7a74ca5584ab 2 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ac54fa11e28ed9b93bbf92af2b8f4d61717c7a74ca5584ab 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.A8Y 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.A8Y 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.A8Y 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:15:16.796 20:38:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=15373d53e83fe57376380ad90ae91aa4 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Ckt 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 15373d53e83fe57376380ad90ae91aa4 0 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 15373d53e83fe57376380ad90ae91aa4 0 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=15373d53e83fe57376380ad90ae91aa4 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Ckt 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Ckt 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Ckt 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=21d31382b72291dfed981a4189e2d252290e3036b83f7174dc526b77f3605464 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.TP8 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 21d31382b72291dfed981a4189e2d252290e3036b83f7174dc526b77f3605464 3 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 21d31382b72291dfed981a4189e2d252290e3036b83f7174dc526b77f3605464 3 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=21d31382b72291dfed981a4189e2d252290e3036b83f7174dc526b77f3605464 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.TP8 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.TP8 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.TP8 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 77303 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 77303 ']' 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:16.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:16.796 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:17.053 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:17.053 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:15:17.053 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DPZ 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.hqZ ]] 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hqZ 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.LyD 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.kQK ]] 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.kQK 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.k3O 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.q1t ]] 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.q1t 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.A8Y 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Ckt ]] 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Ckt 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.TP8 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:17.054 20:38:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:15:17.054 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:15:17.311 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:15:17.311 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:17.570 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:17.570 Waiting for block devices as requested 00:15:17.570 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:17.570 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:15:18.137 No valid GPT data, bailing 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:15:18.137 No valid GPT data, bailing 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:15:18.137 No valid GPT data, bailing 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:15:18.137 No valid GPT data, bailing 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid=38d6bd30-54c5-4858-a242-ab15764fb2d9 -a 10.0.0.1 -t tcp -s 4420 00:15:18.137 00:15:18.137 Discovery Log Number of Records 2, Generation counter 2 00:15:18.137 =====Discovery Log Entry 0====== 00:15:18.137 trtype: tcp 00:15:18.137 adrfam: ipv4 00:15:18.137 subtype: current discovery subsystem 00:15:18.137 treq: not specified, sq flow control disable supported 00:15:18.137 portid: 1 00:15:18.137 trsvcid: 4420 00:15:18.137 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:18.137 traddr: 10.0.0.1 00:15:18.137 eflags: none 00:15:18.137 sectype: none 00:15:18.137 =====Discovery Log Entry 1====== 00:15:18.137 trtype: tcp 00:15:18.137 adrfam: ipv4 00:15:18.137 subtype: nvme subsystem 00:15:18.137 treq: not specified, sq flow control disable supported 00:15:18.137 portid: 1 00:15:18.137 trsvcid: 4420 00:15:18.137 subnqn: nqn.2024-02.io.spdk:cnode0 00:15:18.137 traddr: 10.0.0.1 00:15:18.137 eflags: none 00:15:18.137 sectype: none 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:18.137 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:18.138 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:18.138 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:18.138 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:18.138 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:18.138 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: ]] 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.417 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.418 nvme0n1 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: ]] 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.418 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.676 nvme0n1 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.676 
20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: ]] 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:18.676 20:38:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.676 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.676 nvme0n1 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:18.677 20:38:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: ]] 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.677 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.935 nvme0n1 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: ]] 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.935 20:38:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.935 nvme0n1 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.935 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:19.193 
20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
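The trace above and below repeats one sequence per key and per FFDHE group: host/auth.sh programs the kernel nvmet target with a DH-CHAP secret, restricts the SPDK host to the matching digest and dhgroup, attaches a controller with that key, checks that it shows up as nvme0, and detaches it before moving on. The following is a minimal bash sketch of that loop, reconstructed from the rpc_cmd invocations visible in the trace; it is not a verbatim excerpt of host/auth.sh. The nvmet configfs attribute paths, the rpc.py wrapper path, and the earlier setup that fills keys[]/ckeys[] and registers the SPDK keyring entries key0..key4 / ckey0..ckey4 are assumptions, none of which appear in this portion of the log.

# Sketch of the per-key DH-CHAP loop seen in the trace (assumptions noted inline).
rpc=./scripts/rpc.py                                   # assumed path for rpc_cmd's wrapper
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0
host_cfs=/sys/kernel/config/nvmet/hosts/$hostnqn       # assumed nvmet configfs layout
declare -a keys ckeys                                  # DHHC-1 secrets, generated earlier in the test

digest=sha256                                          # digest under test in this part of the trace
for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do   # groups seen in this part of the trace
  for keyid in "${!keys[@]}"; do
    # Target side: program hash, FFDHE group and DH-CHAP secret for the host entry.
    echo "hmac($digest)"   > "$host_cfs/dhchap_hash"    # assumed attribute names
    echo "$dhgroup"        > "$host_cfs/dhchap_dhgroup"
    echo "${keys[keyid]}"  > "$host_cfs/dhchap_key"
    if [[ -n "${ckeys[keyid]}" ]]; then
      echo "${ckeys[keyid]}" > "$host_cfs/dhchap_ctrl_key"
    fi

    # Host side: limit SPDK to the digest/dhgroup under test, attach, verify, detach.
    "$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    ctrlr_key=()
    [[ -n "${ckeys[keyid]}" ]] && ctrlr_key=(--dhchap-ctrlr-key "ckey$keyid")
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" "${ctrlr_key[@]}"
    [[ "$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    "$rpc" bdev_nvme_detach_controller nvme0
  done
done

Detaching and re-attaching for every key is what forces a fresh DH-CHAP handshake per iteration, which is why the same bdev_nvme_get_controllers / bdev_nvme_detach_controller pair keeps recurring in the log for each dhgroup and keyid.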
00:15:19.193 nvme0n1 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:19.193 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: ]] 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:19.451 20:38:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.451 nvme0n1 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.451 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:19.709 20:38:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: ]] 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:19.709 20:38:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.709 nvme0n1 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.709 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: ]] 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.710 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.967 nvme0n1 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: ]] 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:19.967 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:19.968 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:19.968 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:19.968 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:19.968 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.968 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.968 nvme0n1 00:15:19.968 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.968 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:19.968 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:19.968 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.968 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:20.225 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:20.226 nvme0n1 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:20.226 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:20.823 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:20.823 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: ]] 00:15:20.823 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:20.823 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:15:20.823 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:20.823 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:20.823 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:20.823 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:20.823 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:20.823 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:20.823 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.823 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:20.823 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.823 20:38:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:20.823 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:20.823 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:20.823 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:20.823 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:20.823 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:20.823 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:20.823 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:20.823 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:20.823 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:20.823 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:20.823 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.823 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.823 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:21.080 nvme0n1 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: ]] 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:21.080 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:21.081 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:21.081 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:21.081 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:21.081 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:21.081 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:21.081 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:21.081 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:21.081 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:21.081 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.081 20:38:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.081 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:21.339 nvme0n1 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: ]] 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:21.339 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:21.340 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.340 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.340 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:21.598 nvme0n1 00:15:21.598 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.598 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:21.598 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:21.598 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.598 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:21.598 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.598 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.598 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:21.598 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.598 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:21.598 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.598 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:21.598 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:15:21.598 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:21.598 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:21.598 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:21.598 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:21.598 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:21.598 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:21.598 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:21.598 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:21.598 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:21.598 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: ]] 00:15:21.598 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:21.598 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:15:21.598 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:21.599 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:21.599 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:21.599 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:21.599 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:21.599 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:21.599 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.599 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:21.599 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.599 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:21.599 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:21.599 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:21.599 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:21.599 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:21.599 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:21.599 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:21.599 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:21.599 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:21.599 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:21.599 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:21.599 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:21.599 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.599 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:21.856 nvme0n1 00:15:21.856 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.856 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:21.856 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:21.856 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.856 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:21.856 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.856 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.856 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:21.856 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.856 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:21.856 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.856 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:21.856 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:15:21.856 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:21.856 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:21.856 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:21.856 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:21.857 20:38:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.857 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:22.114 nvme0n1 00:15:22.114 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.114 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:22.114 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:22.114 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.114 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:22.114 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.114 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.114 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:22.114 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.114 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:22.114 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.114 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:22.114 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:22.114 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:15:22.114 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:22.114 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:22.114 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:22.114 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:22.114 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:22.114 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:22.114 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:22.114 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: ]] 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.011 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.270 nvme0n1 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: ]] 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.270 20:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.589 nvme0n1 00:15:24.589 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.589 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:24.589 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:24.589 20:38:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.589 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: ]] 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.849 20:38:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.849 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.107 nvme0n1 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: ]] 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:25.107 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.107 
20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.673 nvme0n1 00:15:25.673 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.673 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:25.673 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:25.673 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.673 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.673 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.673 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.673 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:25.673 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.673 20:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.673 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.938 nvme0n1 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:25.938 20:38:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: ]] 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.938 20:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.878 nvme0n1 00:15:26.878 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.878 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:26.878 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:26.878 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.878 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.878 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.878 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.878 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:26.878 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.878 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.878 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.878 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:26.878 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:15:26.878 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:26.878 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:26.878 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: ]] 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.879 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.446 nvme0n1 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: ]] 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:27.446 
20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.446 20:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.044 nvme0n1 00:15:28.044 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.044 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:28.044 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:28.044 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.044 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.044 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.044 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.044 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:28.044 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.044 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.044 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.044 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:28.044 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:15:28.044 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:28.044 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:28.044 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:28.044 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:28.044 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:28.044 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:28.044 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:28.044 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:28.044 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:28.044 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: ]] 00:15:28.045 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:28.045 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:15:28.045 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:28.045 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:28.045 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:28.045 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:28.045 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:28.045 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:28.045 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.045 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.045 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.045 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:28.045 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:28.045 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:28.045 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:28.045 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:28.045 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:28.045 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:28.045 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:28.045 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:28.045 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:28.045 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:28.045 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:28.045 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.045 20:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.630 nvme0n1 00:15:28.630 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.630 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:28.630 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:28.630 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.630 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.630 20:38:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.630 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.630 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:28.631 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.631 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.631 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.631 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:28.631 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:15:28.631 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:28.631 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:28.631 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:28.631 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:28.631 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:28.631 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:28.631 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:28.631 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:28.631 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:28.631 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:28.631 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:15:28.631 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:28.631 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:28.631 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:28.631 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:28.631 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:28.631 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:28.631 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.631 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.889 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.889 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:28.889 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:28.889 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:28.889 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:28.889 20:38:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:28.889 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:28.889 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:28.889 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:28.889 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:28.889 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:28.889 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:28.889 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:28.889 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.889 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.454 nvme0n1 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: ]] 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.454 20:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:15:29.727 nvme0n1 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: ]] 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.727 nvme0n1 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:15:29.727 
20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: ]] 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.727 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.986 nvme0n1 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: ]] 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:29.986 
20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.986 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:30.245 nvme0n1 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.245 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:30.505 nvme0n1 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: ]] 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.505 20:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:30.505 nvme0n1 00:15:30.505 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.505 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:30.505 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:30.505 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.505 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:30.505 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.505 
20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.505 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:30.505 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.505 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: ]] 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:30.765 20:38:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:30.765 nvme0n1 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:30.765 20:38:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: ]] 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.765 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.024 nvme0n1 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: ]] 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.024 20:38:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.024 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.283 nvme0n1 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:31.283 
20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:31.283 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
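The xtrace above repeats the same per-key sequence for every digest/dhgroup/keyid combination (here sha384 paired with ffdhe2048, ffdhe3072 and ffdhe4096, keys 0 through 4). A minimal sketch of one iteration, reconstructed from the trace rather than copied from host/auth.sh, is shown below; rpc_cmd is the test suite's wrapper around SPDK's scripts/rpc.py, and the key0..key4 / ckey0..ckey3 names are assumed to have been registered with the target earlier in the test (that setup is not part of this excerpt).

    # For tcp transports the initiator-side address comes from NVMF_INITIATOR_IP,
    # which resolves to 10.0.0.1 in this run (get_main_ns_ip in nvmf/common.sh).
    ip=10.0.0.1

    for digest in "${digests[@]}"; do            # e.g. sha384
      for dhgroup in "${dhgroups[@]}"; do        # e.g. ffdhe2048 ffdhe3072 ffdhe4096
        for keyid in "${!keys[@]}"; do           # 0..4; key4 has no controller key
          # Install the DHHC-1 key (and controller key, when one exists) on the
          # kernel nvmet target side; the configfs details are omitted here.
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

          # Restrict the host to a single digest/dhgroup pair, then connect
          # with the matching secret(s) and verify the controller appears.
          rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
                  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                  --dhchap-key "key${keyid}" \
                  ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
          [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
          rpc_cmd bdev_nvme_detach_controller nvme0
        done
      done
    done
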
00:15:31.284 nvme0n1 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: ]] 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:31.284 20:38:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.284 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.544 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.544 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:31.544 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:31.544 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:31.544 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:31.544 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:31.544 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:31.544 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:31.544 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:31.544 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:31.544 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:31.544 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:31.544 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.544 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.544 20:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.544 nvme0n1 00:15:31.544 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.544 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:31.544 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.544 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.544 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:31.544 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.544 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.544 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:31.544 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.544 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.544 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.544 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:31.544 20:38:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:15:31.544 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:31.544 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:31.544 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:31.544 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: ]] 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:31.545 20:38:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.545 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.830 nvme0n1 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: ]] 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.830 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.089 nvme0n1 00:15:32.089 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.089 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:32.089 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.089 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.089 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:32.089 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.089 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.089 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:15:32.089 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.089 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.089 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.089 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:32.089 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:15:32.089 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:32.089 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:32.089 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:32.089 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:32.089 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:32.089 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:32.089 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:32.089 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:32.089 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:32.089 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: ]] 00:15:32.089 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:32.090 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:15:32.090 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:32.090 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:32.090 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:32.090 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:32.090 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:32.090 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:32.090 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.090 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.090 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.090 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:32.090 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:32.090 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:32.090 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:32.090 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:32.090 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:32.090 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:32.090 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:32.090 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:32.090 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:32.090 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:32.090 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:32.090 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.090 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.349 nvme0n1 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.349 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.607 nvme0n1 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: ]] 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.608 20:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.608 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.608 20:38:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:32.608 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:32.608 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:32.608 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:32.608 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:32.608 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:32.608 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:32.608 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:32.608 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:32.608 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:32.608 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:32.608 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.608 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.608 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.865 nvme0n1 00:15:32.865 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.865 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:32.865 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.865 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.865 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:32.865 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: ]] 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.123 20:38:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.123 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.381 nvme0n1 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: ]] 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.381 20:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.948 nvme0n1 00:15:33.948 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.948 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:33.948 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:33.948 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.948 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.948 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: ]] 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.949 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.516 nvme0n1 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:34.516 20:38:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.516 20:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.081 nvme0n1 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: ]] 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:35.081 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:35.082 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:35.082 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:35.082 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:35.082 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:35.082 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:35.082 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.082 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.082 20:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.029 nvme0n1 00:15:36.029 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.029 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:36.029 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.029 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:36.029 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.029 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.029 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.029 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:36.029 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.029 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.029 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.029 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:36.029 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:15:36.029 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:36.029 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:36.029 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:36.029 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:36.029 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:36.029 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:36.029 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:36.029 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:36.029 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:36.029 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: ]] 00:15:36.029 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:36.029 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:15:36.030 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:36.030 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:36.030 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:36.030 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:36.030 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:36.030 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:36.030 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.030 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.030 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.030 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:36.030 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:36.030 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:36.030 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:36.030 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:36.030 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:36.030 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:36.030 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:36.030 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:36.030 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:36.030 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:36.030 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.030 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.030 20:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.596 nvme0n1 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:36.596 20:38:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: ]] 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.596 20:38:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.596 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.531 nvme0n1 00:15:37.531 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.531 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:37.531 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.531 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.531 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:37.531 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.531 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.531 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:37.531 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.531 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.531 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.531 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:37.531 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:15:37.531 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:37.532 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:37.532 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:37.532 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:37.532 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:37.532 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:37.532 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:37.532 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:37.532 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:37.532 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: ]] 00:15:37.532 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:37.532 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:15:37.532 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:37.532 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:37.532 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:37.532 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:37.532 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:37.532 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:37.532 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.532 20:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.532 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.532 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:37.532 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:37.532 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:37.532 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:37.532 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:37.532 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:37.532 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:37.532 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:37.532 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:37.532 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:37.532 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:37.532 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:37.532 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.532 
20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.480 nvme0n1 00:15:38.480 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.480 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:38.480 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.480 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.480 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:38.480 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.480 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.480 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:38.480 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.480 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.480 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.480 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:38.480 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:15:38.480 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:38.480 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:38.480 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:38.480 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:38.480 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:38.480 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:38.480 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.416 nvme0n1 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:15:39.416 20:38:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: ]] 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:39.416 20:38:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.416 nvme0n1 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:39.416 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: ]] 00:15:39.417 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:39.417 20:38:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:15:39.417 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:39.417 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:39.417 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:39.417 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:39.417 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:39.417 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:39.417 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.417 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.417 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.417 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:39.417 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:39.417 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:39.417 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:39.417 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:39.417 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:39.417 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:39.417 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:39.417 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:39.417 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:39.417 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:39.417 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.417 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.417 20:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.675 nvme0n1 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: ]] 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.675 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.676 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:39.676 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:39.676 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:39.676 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:39.676 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:39.676 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:39.676 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:39.676 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:39.676 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:39.676 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:39.676 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:39.676 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.676 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.676 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.934 nvme0n1 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: ]] 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.934 nvme0n1 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.934 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.192 nvme0n1 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:15:40.192 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: ]] 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.193 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:15:40.451 nvme0n1 00:15:40.451 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: ]] 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.452 20:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.711 nvme0n1 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:15:40.711 
20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: ]] 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.711 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.970 nvme0n1 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: ]] 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:40.970 
20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:40.970 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:40.971 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:40.971 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:40.971 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.971 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.230 nvme0n1 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.230 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.489 nvme0n1 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: ]] 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.489 20:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.748 nvme0n1 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.748 
20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: ]] 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:41.748 20:38:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:41.748 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:41.749 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:41.749 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:41.749 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:41.749 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:41.749 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:41.749 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:41.749 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.749 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.749 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.008 nvme0n1 00:15:42.008 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.008 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:42.008 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.008 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.008 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:42.008 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:42.269 20:38:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: ]] 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.269 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.529 nvme0n1 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: ]] 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.529 20:38:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.529 20:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.789 nvme0n1 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:42.790 
20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.790 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
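For reference, the cycle this trace repeats for every digest/dhgroup/keyid combination reduces to the sketch below. It is a condensed reading of the trace, not additional test code: the command names, flags, address, and NQNs are exactly the ones already printed above, and rpc_cmd is the autotest RPC wrapper visible throughout this log; the concrete digest, dhgroup, and keyid values are just one iteration picked from the sweep.

    # One iteration of the connect_authenticate loop, as seen in the trace above.
    digest=sha512
    dhgroup=ffdhe4096
    keyid=4

    # Restrict the host to the digest/dhgroup pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Connect with the numbered DH-HMAC-CHAP key (a --dhchap-ctrlr-key ckey<N> is
    # added only for keyids that have a controller key, as the trace shows).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}"

    # Verify the controller came up, then tear it down before the next keyid.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0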
00:15:43.047 nvme0n1 00:15:43.047 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.047 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:43.047 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:43.047 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.047 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.047 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.047 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.047 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:43.047 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.047 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.047 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.047 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:43.047 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:43.047 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:15:43.047 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:43.047 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:43.047 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:43.047 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:43.047 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:43.047 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:43.047 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:43.047 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:43.047 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:43.047 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: ]] 00:15:43.047 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:43.048 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:15:43.048 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:43.048 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:43.048 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:43.048 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:43.048 20:38:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:43.048 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:43.048 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.048 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.048 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.048 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:43.048 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:43.048 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:43.048 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:43.048 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:43.048 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:43.048 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:43.048 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:43.048 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:43.048 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:43.048 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:43.048 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.048 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.048 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.612 nvme0n1 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:43.612 20:38:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: ]] 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:43.612 20:38:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.612 20:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.177 nvme0n1 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: ]] 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.177 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.434 nvme0n1 00:15:44.434 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.434 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:44.434 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:44.434 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.434 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.434 20:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.702 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.702 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: ]] 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.703 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.994 nvme0n1 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.994 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:45.252 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.252 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:45.252 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:45.252 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:45.252 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:45.252 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:45.252 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:45.252 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:45.252 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:45.252 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:45.252 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:45.252 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:45.252 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:45.252 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.252 20:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:45.510 nvme0n1 00:15:45.510 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.510 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:45.510 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:45.510 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.510 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:45.510 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMyMWZmNTE0ODVjNjI0YTVhYzk3OGZjNDRlY2RhMDkGjslj: 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: ]] 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODBiMzNiNjg4YzMzY2ZhY2QyMmYzNTc5YWVmMjk2YmJjNWMwZWQxNTA2ZjVhYTc2NzQwOGEwYjExNjgzNjExM0qcjm0=: 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.768 20:39:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.768 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.333 nvme0n1 00:15:46.333 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.333 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:46.333 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.333 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.333 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: ]] 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.591 20:39:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.591 20:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.529 nvme0n1 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: ]] 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.529 20:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.101 nvme0n1 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWM1NGZhMTFlMjhlZDliOTNiYmY5MmFmMmI4ZjRkNjE3MTdjN2E3NGNhNTU4NGFiVL/3+A==: 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: ]] 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTUzNzNkNTNlODNmZTU3Mzc2MzgwYWQ5MGFlOTFhYTQtDa//: 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.101 20:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.033 nvme0n1 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjFkMzEzODJiNzIyOTFkZmVkOTgxYTQxODllMmQyNTIyOTBlMzAzNmI4M2Y3MTc0ZGM1MjZiNzdmMzYwNTQ2NH2UJVY=: 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:49.033 20:39:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.033 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.034 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.034 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:49.034 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:49.034 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:49.034 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:49.034 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:49.034 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:49.034 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:49.034 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:49.034 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:49.034 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:49.034 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:49.034 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:49.034 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.034 20:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.600 nvme0n1 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: ]] 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.600 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.600 request: 00:15:49.600 { 00:15:49.600 "name": "nvme0", 00:15:49.600 "trtype": "tcp", 00:15:49.600 "traddr": "10.0.0.1", 00:15:49.600 "adrfam": "ipv4", 00:15:49.600 "trsvcid": "4420", 00:15:49.600 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:15:49.600 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:15:49.600 "prchk_reftag": false, 00:15:49.600 "prchk_guard": false, 00:15:49.600 "hdgst": false, 00:15:49.600 "ddgst": false, 00:15:49.600 "allow_unrecognized_csi": false, 00:15:49.600 "method": "bdev_nvme_attach_controller", 00:15:49.600 "req_id": 1 00:15:49.600 } 00:15:49.600 Got JSON-RPC error response 00:15:49.600 response: 00:15:49.600 { 00:15:49.600 "code": -5, 00:15:49.600 "message": "Input/output error" 00:15:49.601 } 00:15:49.601 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:49.601 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:15:49.601 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:49.601 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:49.601 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:49.601 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:15:49.601 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:15:49.601 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.601 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.601 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.859 request: 00:15:49.859 { 00:15:49.859 "name": "nvme0", 00:15:49.859 "trtype": "tcp", 00:15:49.859 "traddr": "10.0.0.1", 00:15:49.859 "adrfam": "ipv4", 00:15:49.859 "trsvcid": "4420", 00:15:49.859 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:15:49.859 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:15:49.859 "prchk_reftag": false, 00:15:49.859 "prchk_guard": false, 00:15:49.859 "hdgst": false, 00:15:49.859 "ddgst": false, 00:15:49.859 "dhchap_key": "key2", 00:15:49.859 "allow_unrecognized_csi": false, 00:15:49.859 "method": "bdev_nvme_attach_controller", 00:15:49.859 "req_id": 1 00:15:49.859 } 00:15:49.859 Got JSON-RPC error response 00:15:49.859 response: 00:15:49.859 { 00:15:49.859 "code": -5, 00:15:49.859 "message": "Input/output error" 00:15:49.859 } 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:49.859 20:39:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.859 request: 00:15:49.859 { 00:15:49.859 "name": "nvme0", 00:15:49.859 "trtype": "tcp", 00:15:49.859 "traddr": "10.0.0.1", 00:15:49.859 "adrfam": "ipv4", 00:15:49.859 "trsvcid": "4420", 
00:15:49.859 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:15:49.859 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:15:49.859 "prchk_reftag": false, 00:15:49.859 "prchk_guard": false, 00:15:49.859 "hdgst": false, 00:15:49.859 "ddgst": false, 00:15:49.859 "dhchap_key": "key1", 00:15:49.859 "dhchap_ctrlr_key": "ckey2", 00:15:49.859 "allow_unrecognized_csi": false, 00:15:49.859 "method": "bdev_nvme_attach_controller", 00:15:49.859 "req_id": 1 00:15:49.859 } 00:15:49.859 Got JSON-RPC error response 00:15:49.859 response: 00:15:49.859 { 00:15:49.859 "code": -5, 00:15:49.859 "message": "Input/output error" 00:15:49.859 } 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.859 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.117 nvme0n1 00:15:50.117 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.117 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:15:50.117 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:50.117 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:50.117 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:50.117 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:50.117 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:50.117 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:50.117 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:50.117 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:50.117 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:50.117 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: ]] 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.118 request: 00:15:50.118 { 00:15:50.118 "name": "nvme0", 00:15:50.118 "dhchap_key": "key1", 00:15:50.118 "dhchap_ctrlr_key": "ckey2", 00:15:50.118 "method": "bdev_nvme_set_keys", 00:15:50.118 "req_id": 1 00:15:50.118 } 00:15:50.118 Got JSON-RPC error response 00:15:50.118 response: 00:15:50.118 
{ 00:15:50.118 "code": -13, 00:15:50.118 "message": "Permission denied" 00:15:50.118 } 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:15:50.118 20:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:15:51.051 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:15:51.051 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:15:51.051 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.051 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.051 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.051 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:15:51.051 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:15:51.051 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:51.051 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:51.051 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:51.051 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:51.051 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:51.051 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:51.051 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:51.051 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:51.052 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBhMDhiMTQ0N2E5ZDdkZDk1MGE0YTJkNTA1MzI5MDNiNjdhYmIzNjc1MmI1Yzg1u+LiMw==: 00:15:51.052 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: ]] 00:15:51.052 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNhNDkxMzQzY2M2YWQyNDE3ZGJhZGNmOTdmYWUzMWYyNzA5NzEyYzQ0MjVlZTdiuFQz+A==: 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.310 nvme0n1 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGVlMDI0MzM2OWMwMTNkNTMyY2JjNTA3ZTliOTk0YzPiST4Q: 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: ]] 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJhNzA0M2Q3ZGYxMWIzYTcxYTVjM2ZkZDZmMmY0NDaH/dOn: 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.310 request: 00:15:51.310 { 00:15:51.310 "name": "nvme0", 00:15:51.310 "dhchap_key": "key2", 00:15:51.310 "dhchap_ctrlr_key": "ckey1", 00:15:51.310 "method": "bdev_nvme_set_keys", 00:15:51.310 "req_id": 1 00:15:51.310 } 00:15:51.310 Got JSON-RPC error response 00:15:51.310 response: 00:15:51.310 { 00:15:51.310 "code": -13, 00:15:51.310 "message": "Permission denied" 00:15:51.310 } 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:15:51.310 20:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:15:52.329 20:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:15:52.329 20:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:15:52.329 20:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.329 20:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.330 20:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.330 20:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:15:52.330 20:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:15:52.330 20:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:15:52.330 20:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:15:52.330 20:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:15:52.330 20:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:53.265 rmmod nvme_tcp 00:15:53.265 rmmod nvme_fabrics 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 77303 ']' 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 77303 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 77303 ']' 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 77303 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77303 00:15:53.265 killing process with pid 77303 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77303' 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 77303 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 77303 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:53.265 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:53.523 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:53.523 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:53.523 20:39:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:53.523 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:53.523 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:53.523 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:53.523 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:53.523 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:53.523 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:53.523 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:53.523 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:53.523 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:53.523 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:53.523 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.523 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:53.523 20:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.523 20:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:15:53.523 20:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:15:53.523 20:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:15:53.523 20:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:15:53.523 20:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:15:53.523 20:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:15:53.523 20:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:53.523 20:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:15:53.523 20:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:15:53.523 20:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:53.523 20:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:15:53.523 20:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:15:53.782 20:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:54.348 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:54.348 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
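The cleanup traced above (host/auth.sh cleanup, then clean_kernel_target in nvmf/common.sh) unwinds the kernel nvmet target that the auth test built through configfs. Order matters: symlinks and child directories have to go before their parents can be removed. Condensed, the teardown amounts to roughly the following, using the same paths that appear in the trace; the target of the bare `echo 0` is not visible here and is assumed to be the namespace enable attribute.

    # detach the allowed host from the subsystem, then drop the host definition
    rm    /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    # disable the namespace before removing it (assumed destination of the 'echo 0' in the trace)
    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
    # unlink the subsystem from the port, then remove namespace, port and subsystem directories
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    # finally unload the kernel target modules
    modprobe -r nvmet_tcp nvmet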
00:15:54.348 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:54.348 20:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.DPZ /tmp/spdk.key-null.LyD /tmp/spdk.key-sha256.k3O /tmp/spdk.key-sha384.A8Y /tmp/spdk.key-sha512.TP8 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:15:54.348 20:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:54.914 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:54.914 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:54.914 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:54.914 00:15:54.914 real 0m39.784s 00:15:54.914 user 0m31.472s 00:15:54.914 sys 0m3.226s 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.914 ************************************ 00:15:54.914 END TEST nvmf_auth_host 00:15:54.914 ************************************ 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.914 ************************************ 00:15:54.914 START TEST nvmf_digest 00:15:54.914 ************************************ 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:15:54.914 * Looking for test storage... 
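Before moving on to the digest suite, a note on the auth run that just ended: the JSON-RPC responses with code -13 ("Permission denied") above are expected failures. host/auth.sh first attaches a controller with a matching DHHC-1 key pair, then deliberately calls bdev_nvme_set_keys with a controller key that does not match what the kernel target was configured with, so re-authentication is refused and the test's NOT wrapper treats the error as success. A rough reconstruction of the two RPCs, taken from the trace (rpc_cmd in the log is the harness wrapper, shown here as direct rpc.py calls; key1/ckey1/key2 name keys registered earlier in the script, outside this excerpt):

    # attach with the expected key pair; this succeeds
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1 \
        --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
    # try to rotate to a mismatched pair (key2 with ckey1); the target rejects it with -13 "Permission denied"
    scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1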
00:15:54.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:54.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.914 --rc genhtml_branch_coverage=1 00:15:54.914 --rc genhtml_function_coverage=1 00:15:54.914 --rc genhtml_legend=1 00:15:54.914 --rc geninfo_all_blocks=1 00:15:54.914 --rc geninfo_unexecuted_blocks=1 00:15:54.914 00:15:54.914 ' 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:54.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.914 --rc genhtml_branch_coverage=1 00:15:54.914 --rc genhtml_function_coverage=1 00:15:54.914 --rc genhtml_legend=1 00:15:54.914 --rc geninfo_all_blocks=1 00:15:54.914 --rc geninfo_unexecuted_blocks=1 00:15:54.914 00:15:54.914 ' 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:54.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.914 --rc genhtml_branch_coverage=1 00:15:54.914 --rc genhtml_function_coverage=1 00:15:54.914 --rc genhtml_legend=1 00:15:54.914 --rc geninfo_all_blocks=1 00:15:54.914 --rc geninfo_unexecuted_blocks=1 00:15:54.914 00:15:54.914 ' 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:54.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.914 --rc genhtml_branch_coverage=1 00:15:54.914 --rc genhtml_function_coverage=1 00:15:54.914 --rc genhtml_legend=1 00:15:54.914 --rc geninfo_all_blocks=1 00:15:54.914 --rc geninfo_unexecuted_blocks=1 00:15:54.914 00:15:54.914 ' 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.914 20:39:09 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.914 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:55.173 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:55.173 Cannot find device "nvmf_init_br" 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:55.173 Cannot find device "nvmf_init_br2" 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:55.173 Cannot find device "nvmf_tgt_br" 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:15:55.173 Cannot find device "nvmf_tgt_br2" 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:55.173 Cannot find device "nvmf_init_br" 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:55.173 Cannot find device "nvmf_init_br2" 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:55.173 Cannot find device "nvmf_tgt_br" 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:55.173 Cannot find device "nvmf_tgt_br2" 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:15:55.173 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:55.173 Cannot find device "nvmf_br" 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:55.174 Cannot find device "nvmf_init_if" 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:55.174 Cannot find device "nvmf_init_if2" 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:55.174 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:55.174 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:55.174 20:39:09 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:55.174 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:55.432 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:55.432 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:15:55.432 00:15:55.432 --- 10.0.0.3 ping statistics --- 00:15:55.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.432 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:55.432 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:55.432 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:15:55.432 00:15:55.432 --- 10.0.0.4 ping statistics --- 00:15:55.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.432 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:55.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:55.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:15:55.432 00:15:55.432 --- 10.0.0.1 ping statistics --- 00:15:55.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.432 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:55.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:55.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:15:55.432 00:15:55.432 --- 10.0.0.2 ping statistics --- 00:15:55.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.432 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:15:55.432 ************************************ 00:15:55.432 START TEST nvmf_digest_clean 00:15:55.432 ************************************ 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
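Stepping back to the nvmftestinit/nvmf_veth_init sequence traced above: it builds the self-contained topology that the digest tests (and the ping checks) rely on. The target runs inside the nvmf_tgt_ns_spdk network namespace, initiator and target ends are veth pairs joined by the nvmf_br bridge, and the iptables rules carry an SPDK_NVMF comment so the iptr helper seen in the earlier cleanup can strip them with iptables-save | grep -v SPDK_NVMF | iptables-restore. A condensed sketch of the bring-up, with the same names and addresses as in the trace (the second interface pair, 10.0.0.2/10.0.0.4, is set up the same way):

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: *_if ends carry addresses, *_br ends plug into the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # addressing: 10.0.0.1 = initiator side, 10.0.0.3 = target side inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # bridge the two halves together
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # allow NVMe/TCP traffic on 4420, tagged so cleanup can find the rules again
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
    ping -c 1 10.0.0.3   # initiator -> target sanity check, as in the trace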
00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=78987 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 78987 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 78987 ']' 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.432 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.433 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.433 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.433 20:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:15:55.433 [2024-11-26 20:39:09.867389] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:15:55.433 [2024-11-26 20:39:09.867438] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.689 [2024-11-26 20:39:10.009603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.689 [2024-11-26 20:39:10.048923] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.689 [2024-11-26 20:39:10.048962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.689 [2024-11-26 20:39:10.048969] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.689 [2024-11-26 20:39:10.048974] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.689 [2024-11-26 20:39:10.048978] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
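With the namespace in place, nvmfappstart launches the SPDK target inside it with --wait-for-rpc, so nothing is configured until the test issues explicit RPCs, and then blocks until the application's UNIX-domain RPC socket answers; that is what the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above refers to. In outline, with the paths from the trace (the wait is sketched here as a simple poll on a harmless RPC; the real waitforlisten helper is more involved):

    # start the target inside the test namespace, idle until RPCs arrive
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # wait for the default RPC socket before the test starts configuring the target
    while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done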
00:15:55.689 [2024-11-26 20:39:10.049280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.260 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:56.260 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:15:56.260 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:56.260 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:56.260 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:15:56.260 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.260 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:15:56.260 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:15:56.260 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:15:56.260 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.260 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:15:56.519 [2024-11-26 20:39:10.829512] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:56.519 null0 00:15:56.519 [2024-11-26 20:39:10.869918] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:56.519 [2024-11-26 20:39:10.893983] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:56.519 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.519 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:15:56.519 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:15:56.519 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:15:56.519 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:15:56.519 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:15:56.519 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:15:56.519 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:15:56.519 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79019 00:15:56.519 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79019 /var/tmp/bperf.sock 00:15:56.519 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79019 ']' 00:15:56.519 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:56.519 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:56.519 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:56.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:56.519 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:56.519 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:15:56.519 20:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:15:56.519 [2024-11-26 20:39:10.937346] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:15:56.519 [2024-11-26 20:39:10.937565] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79019 ] 00:15:56.778 [2024-11-26 20:39:11.075168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.778 [2024-11-26 20:39:11.112788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.344 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:57.344 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:15:57.344 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:15:57.344 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:15:57.344 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:15:57.604 [2024-11-26 20:39:12.020723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:57.604 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:57.604 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:57.865 nvme0n1 00:15:57.865 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:15:57.865 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:58.124 Running I/O for 2 seconds... 
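The block just above is the bperf pattern each digest sub-test repeats: bdevperf is started idle on its own RPC socket (/var/tmp/bperf.sock) with --wait-for-rpc, the test calls framework_start_init over that socket, attaches an NVMe-oF bdev with the digest option under test (--ddgst here asks for the NVMe/TCP data digest), and then drives the workload through bdevperf.py perform_tests, whose per-second output and summary follow. Reconstructed from the trace:

    # 1. start bdevperf idle on its own socket (randread, 4 KiB I/O, queue depth 128, 2 s runs)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # 2. finish subsystem init, then attach the target listener with data digest enabled
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # 3. run the 2-second workload against the attached nvme0n1 bdev
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests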
00:16:00.005 15240.00 IOPS, 59.53 MiB/s [2024-11-26T20:39:14.560Z] 15176.50 IOPS, 59.28 MiB/s 00:16:00.005 Latency(us) 00:16:00.005 [2024-11-26T20:39:14.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.005 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:00.005 nvme0n1 : 2.01 15207.22 59.40 0.00 0.00 8411.24 7914.73 17845.96 00:16:00.005 [2024-11-26T20:39:14.560Z] =================================================================================================================== 00:16:00.005 [2024-11-26T20:39:14.560Z] Total : 15207.22 59.40 0.00 0.00 8411.24 7914.73 17845.96 00:16:00.005 { 00:16:00.005 "results": [ 00:16:00.005 { 00:16:00.005 "job": "nvme0n1", 00:16:00.006 "core_mask": "0x2", 00:16:00.006 "workload": "randread", 00:16:00.006 "status": "finished", 00:16:00.006 "queue_depth": 128, 00:16:00.006 "io_size": 4096, 00:16:00.006 "runtime": 2.012728, 00:16:00.006 "iops": 15207.221244003164, 00:16:00.006 "mibps": 59.40320798438736, 00:16:00.006 "io_failed": 0, 00:16:00.006 "io_timeout": 0, 00:16:00.006 "avg_latency_us": 8411.236221802243, 00:16:00.006 "min_latency_us": 7914.732307692308, 00:16:00.006 "max_latency_us": 17845.956923076923 00:16:00.006 } 00:16:00.006 ], 00:16:00.006 "core_count": 1 00:16:00.006 } 00:16:00.006 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:00.006 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:00.006 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:00.006 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:00.006 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:00.006 | select(.opcode=="crc32c") 00:16:00.006 | "\(.module_name) \(.executed)"' 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79019 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79019 ']' 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79019 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79019 00:16:00.266 killing process with pid 79019 00:16:00.266 Received shutdown signal, test time was about 2.000000 seconds 00:16:00.266 00:16:00.266 Latency(us) 00:16:00.266 [2024-11-26T20:39:14.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:16:00.266 [2024-11-26T20:39:14.821Z] =================================================================================================================== 00:16:00.266 [2024-11-26T20:39:14.821Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79019' 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79019 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79019 00:16:00.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79074 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79074 /var/tmp/bperf.sock 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79074 ']' 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:00.266 20:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:00.525 [2024-11-26 20:39:14.846635] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:16:00.525 [2024-11-26 20:39:14.846887] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:16:00.525 Zero copy mechanism will not be used. 
00:16:00.525 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79074 ] 00:16:00.525 [2024-11-26 20:39:14.988754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.525 [2024-11-26 20:39:15.028459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.460 20:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.460 20:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:16:01.460 20:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:01.460 20:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:01.460 20:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:01.460 [2024-11-26 20:39:15.940991] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:01.461 20:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:01.461 20:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:01.718 nvme0n1 00:16:01.718 20:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:01.718 20:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:01.976 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:01.976 Zero copy mechanism will not be used. 00:16:01.976 Running I/O for 2 seconds... 
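This second run repeats the same flow with 128 KiB reads at queue depth 16, which is why bdevperf prints the zero-copy notice: 131072 bytes exceeds the 65536-byte zero-copy threshold, so regular copies are used for these buffers. The MiB/s column in the summary that follows is simply IOPS scaled by the I/O size; a quick cross-check of the numbers reported below, as an illustrative one-liner:

    # 8851.79 IOPS at 131072 bytes per I/O -> expected MiB/s in the summary table
    awk 'BEGIN { printf "%.2f MiB/s\n", 8851.79 * 131072 / (1024 * 1024) }'
    # prints 1106.47 MiB/s, matching the Total row reported by bdevperf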
00:16:03.853 8800.00 IOPS, 1100.00 MiB/s [2024-11-26T20:39:18.408Z] 8856.00 IOPS, 1107.00 MiB/s 00:16:03.853 Latency(us) 00:16:03.853 [2024-11-26T20:39:18.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.853 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:03.853 nvme0n1 : 2.00 8851.79 1106.47 0.00 0.00 1804.21 1651.00 3503.66 00:16:03.853 [2024-11-26T20:39:18.408Z] =================================================================================================================== 00:16:03.853 [2024-11-26T20:39:18.408Z] Total : 8851.79 1106.47 0.00 0.00 1804.21 1651.00 3503.66 00:16:03.853 { 00:16:03.853 "results": [ 00:16:03.853 { 00:16:03.853 "job": "nvme0n1", 00:16:03.853 "core_mask": "0x2", 00:16:03.853 "workload": "randread", 00:16:03.853 "status": "finished", 00:16:03.853 "queue_depth": 16, 00:16:03.853 "io_size": 131072, 00:16:03.853 "runtime": 2.002758, 00:16:03.853 "iops": 8851.793376933208, 00:16:03.853 "mibps": 1106.474172116651, 00:16:03.853 "io_failed": 0, 00:16:03.853 "io_timeout": 0, 00:16:03.853 "avg_latency_us": 1804.2149180783117, 00:16:03.853 "min_latency_us": 1651.0030769230768, 00:16:03.853 "max_latency_us": 3503.6553846153847 00:16:03.853 } 00:16:03.853 ], 00:16:03.853 "core_count": 1 00:16:03.853 } 00:16:03.853 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:03.853 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:03.853 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:03.853 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:03.853 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:03.853 | select(.opcode=="crc32c") 00:16:03.853 | "\(.module_name) \(.executed)"' 00:16:04.111 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:04.111 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:04.111 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:04.111 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:04.111 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79074 00:16:04.111 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79074 ']' 00:16:04.111 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79074 00:16:04.111 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:04.111 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:04.111 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79074 00:16:04.111 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:04.111 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:16:04.111 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79074' 00:16:04.111 killing process with pid 79074 00:16:04.111 Received shutdown signal, test time was about 2.000000 seconds 00:16:04.111 00:16:04.111 Latency(us) 00:16:04.111 [2024-11-26T20:39:18.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.111 [2024-11-26T20:39:18.666Z] =================================================================================================================== 00:16:04.111 [2024-11-26T20:39:18.666Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:04.111 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79074 00:16:04.111 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79074 00:16:04.370 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:16:04.370 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:04.370 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:04.370 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:16:04.370 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:16:04.370 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:16:04.370 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:04.370 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79135 00:16:04.370 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79135 /var/tmp/bperf.sock 00:16:04.370 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:04.370 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79135 ']' 00:16:04.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:04.370 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:04.370 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:04.370 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:04.370 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:04.370 20:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:04.370 [2024-11-26 20:39:18.794472] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:16:04.370 [2024-11-26 20:39:18.794536] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79135 ] 00:16:04.627 [2024-11-26 20:39:18.933929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.627 [2024-11-26 20:39:18.972956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.193 20:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:05.193 20:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:16:05.193 20:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:05.193 20:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:05.193 20:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:05.453 [2024-11-26 20:39:19.888086] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:05.453 20:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:05.453 20:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:05.710 nvme0n1 00:16:05.710 20:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:05.710 20:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:05.968 Running I/O for 2 seconds... 
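Each of the clean-digest passes in this run repeats the same RPC sequence against the bdevperf instance that was started with --wait-for-rpc. A minimal bash sketch of that sequence, reconstructed only from the calls printed in this log (socket path, address, NQN and jq filter as shown; running it standalone outside the test harness and its network namespace is an assumption of the sketch, not part of the test output):

#!/usr/bin/env bash
# Sketch of the clean-digest pass, assuming bdevperf is already running with
# --wait-for-rpc on /var/tmp/bperf.sock and the target is listening on 10.0.0.3:4420.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bperf.sock

# Release the paused bdevperf application once options are in place.
"$SPDK/scripts/rpc.py" -s "$SOCK" framework_start_init

# Attach the NVMe-oF TCP controller with data digest (--ddgst) enabled.
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Drive the configured workload for its 2-second window.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests

# Confirm crc32c digests were actually computed; with DSA scanning disabled in
# this pass the test expects the software module to report the executions.
"$SPDK/scripts/rpc.py" -s "$SOCK" accel_get_stats \
  | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

The per-pass results and the accel_get_stats check produced by this sequence follow in the log below.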
00:16:07.880 15876.00 IOPS, 62.02 MiB/s [2024-11-26T20:39:22.435Z] 15812.00 IOPS, 61.77 MiB/s 00:16:07.880 Latency(us) 00:16:07.880 [2024-11-26T20:39:22.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.880 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:07.880 nvme0n1 : 2.01 15797.57 61.71 0.00 0.00 8095.95 7662.67 16434.41 00:16:07.880 [2024-11-26T20:39:22.435Z] =================================================================================================================== 00:16:07.880 [2024-11-26T20:39:22.435Z] Total : 15797.57 61.71 0.00 0.00 8095.95 7662.67 16434.41 00:16:07.880 { 00:16:07.880 "results": [ 00:16:07.880 { 00:16:07.880 "job": "nvme0n1", 00:16:07.880 "core_mask": "0x2", 00:16:07.880 "workload": "randwrite", 00:16:07.880 "status": "finished", 00:16:07.880 "queue_depth": 128, 00:16:07.880 "io_size": 4096, 00:16:07.880 "runtime": 2.00993, 00:16:07.880 "iops": 15797.565089331469, 00:16:07.880 "mibps": 61.70923863020105, 00:16:07.880 "io_failed": 0, 00:16:07.880 "io_timeout": 0, 00:16:07.880 "avg_latency_us": 8095.950359904646, 00:16:07.880 "min_latency_us": 7662.670769230769, 00:16:07.880 "max_latency_us": 16434.412307692306 00:16:07.880 } 00:16:07.880 ], 00:16:07.880 "core_count": 1 00:16:07.880 } 00:16:07.880 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:07.880 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:07.880 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:07.880 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:07.880 | select(.opcode=="crc32c") 00:16:07.880 | "\(.module_name) \(.executed)"' 00:16:07.880 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:08.138 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:08.138 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:08.138 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:08.138 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:08.138 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79135 00:16:08.138 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79135 ']' 00:16:08.138 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79135 00:16:08.138 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:08.138 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:08.138 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79135 00:16:08.138 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:08.138 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:16:08.138 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79135' 00:16:08.138 killing process with pid 79135 00:16:08.138 Received shutdown signal, test time was about 2.000000 seconds 00:16:08.138 00:16:08.138 Latency(us) 00:16:08.138 [2024-11-26T20:39:22.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.138 [2024-11-26T20:39:22.693Z] =================================================================================================================== 00:16:08.138 [2024-11-26T20:39:22.693Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:08.138 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79135 00:16:08.138 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79135 00:16:08.396 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:16:08.396 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:08.396 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:08.396 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:16:08.396 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:16:08.396 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:16:08.396 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:08.396 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79196 00:16:08.396 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79196 /var/tmp/bperf.sock 00:16:08.396 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79196 ']' 00:16:08.396 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:08.396 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:08.396 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:08.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:08.396 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:08.396 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:08.396 20:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:08.396 [2024-11-26 20:39:22.760304] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:16:08.396 [2024-11-26 20:39:22.760387] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:16:08.396 Zero copy mechanism will not be used. 
00:16:08.396 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79196 ] 00:16:08.396 [2024-11-26 20:39:22.896448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.396 [2024-11-26 20:39:22.935757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.329 20:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:09.329 20:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:16:09.329 20:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:09.329 20:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:09.329 20:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:09.329 [2024-11-26 20:39:23.880782] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:09.588 20:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:09.588 20:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:09.846 nvme0n1 00:16:09.846 20:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:09.846 20:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:09.846 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:09.846 Zero copy mechanism will not be used. 00:16:09.846 Running I/O for 2 seconds... 
00:16:12.155 8736.00 IOPS, 1092.00 MiB/s [2024-11-26T20:39:26.710Z] 8735.50 IOPS, 1091.94 MiB/s 00:16:12.155 Latency(us) 00:16:12.155 [2024-11-26T20:39:26.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.155 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:12.155 nvme0n1 : 2.00 8729.08 1091.14 0.00 0.00 1829.06 1235.10 3604.48 00:16:12.155 [2024-11-26T20:39:26.710Z] =================================================================================================================== 00:16:12.155 [2024-11-26T20:39:26.710Z] Total : 8729.08 1091.14 0.00 0.00 1829.06 1235.10 3604.48 00:16:12.155 { 00:16:12.155 "results": [ 00:16:12.155 { 00:16:12.155 "job": "nvme0n1", 00:16:12.155 "core_mask": "0x2", 00:16:12.155 "workload": "randwrite", 00:16:12.155 "status": "finished", 00:16:12.155 "queue_depth": 16, 00:16:12.155 "io_size": 131072, 00:16:12.155 "runtime": 2.003418, 00:16:12.155 "iops": 8729.081998863941, 00:16:12.155 "mibps": 1091.1352498579927, 00:16:12.155 "io_failed": 0, 00:16:12.155 "io_timeout": 0, 00:16:12.155 "avg_latency_us": 1829.0557702864382, 00:16:12.155 "min_latency_us": 1235.1015384615384, 00:16:12.155 "max_latency_us": 3604.48 00:16:12.155 } 00:16:12.155 ], 00:16:12.155 "core_count": 1 00:16:12.155 } 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:12.155 | select(.opcode=="crc32c") 00:16:12.155 | "\(.module_name) \(.executed)"' 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79196 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79196 ']' 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79196 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79196 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:12.155 killing process with pid 79196 00:16:12.155 Received shutdown signal, test time was about 2.000000 seconds 00:16:12.155 00:16:12.155 
Latency(us) 00:16:12.155 [2024-11-26T20:39:26.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.155 [2024-11-26T20:39:26.710Z] =================================================================================================================== 00:16:12.155 [2024-11-26T20:39:26.710Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79196' 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79196 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79196 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 78987 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 78987 ']' 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 78987 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78987 00:16:12.155 killing process with pid 78987 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78987' 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 78987 00:16:12.155 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 78987 00:16:12.414 00:16:12.414 real 0m16.979s 00:16:12.414 user 0m33.125s 00:16:12.414 sys 0m3.666s 00:16:12.414 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:12.414 ************************************ 00:16:12.414 END TEST nvmf_digest_clean 00:16:12.414 ************************************ 00:16:12.414 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:12.414 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:16:12.414 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:12.414 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:12.414 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:12.414 ************************************ 00:16:12.414 START TEST nvmf_digest_error 00:16:12.414 ************************************ 00:16:12.414 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:16:12.414 20:39:26 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:16:12.415 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:12.415 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:12.415 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:12.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.415 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=79280 00:16:12.415 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 79280 00:16:12.415 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79280 ']' 00:16:12.415 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.415 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:12.415 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.415 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:12.415 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:12.415 20:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:12.415 [2024-11-26 20:39:26.915668] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:16:12.415 [2024-11-26 20:39:26.916217] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:12.673 [2024-11-26 20:39:27.058072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.673 [2024-11-26 20:39:27.093780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:12.673 [2024-11-26 20:39:27.093817] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:12.673 [2024-11-26 20:39:27.093824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:12.673 [2024-11-26 20:39:27.093829] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:12.673 [2024-11-26 20:39:27.093833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:12.673 [2024-11-26 20:39:27.094108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.238 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:13.238 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:16:13.238 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:13.238 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:13.238 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:13.496 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:13.496 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:16:13.496 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.496 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:13.496 [2024-11-26 20:39:27.826469] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:16:13.496 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.496 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:16:13.496 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:16:13.496 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.496 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:13.496 [2024-11-26 20:39:27.865236] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:13.496 null0 00:16:13.496 [2024-11-26 20:39:27.906396] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:13.496 [2024-11-26 20:39:27.930467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:13.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:16:13.496 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.496 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:16:13.496 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:13.496 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:16:13.496 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:16:13.496 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:16:13.497 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79312 00:16:13.497 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79312 /var/tmp/bperf.sock 00:16:13.497 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79312 ']' 00:16:13.497 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:13.497 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:13.497 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:13.497 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:13.497 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:13.497 20:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:16:13.497 [2024-11-26 20:39:27.973726] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:16:13.497 [2024-11-26 20:39:27.974628] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79312 ] 00:16:13.754 [2024-11-26 20:39:28.115341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.754 [2024-11-26 20:39:28.152515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.754 [2024-11-26 20:39:28.185743] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:14.321 20:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:14.321 20:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:16:14.321 20:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:14.321 20:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:14.578 20:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:14.578 20:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.578 20:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:14.578 20:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.578 20:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:14.578 20:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:14.837 nvme0n1 00:16:14.837 20:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:14.837 20:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.837 20:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:14.837 20:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.837 20:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:14.837 20:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:15.117 Running I/O for 2 seconds... 
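The data-digest error records that follow are the expected outcome of the corrupt-crc32c injection configured by the preceding RPC calls. A minimal bash sketch of that sequence, reconstructed only from the commands visible in this log (socket paths, address, NQN and flags as printed; treating both RPC sockets as directly reachable outside the test's network namespace is an assumption of the sketch):

#!/usr/bin/env bash
# Sketch of the digest-error setup, assuming nvmf_tgt (started with --wait-for-rpc,
# crc32c assigned to the "error" accel module) listens on /var/tmp/spdk.sock and
# bdevperf (-m 2 -w randread -o 4096 -t 2 -q 128 -z) listens on /var/tmp/bperf.sock.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
TGT_SOCK=/var/tmp/spdk.sock
BPERF_SOCK=/var/tmp/bperf.sock

# Initiator: record NVMe error stats and retry failed I/O indefinitely.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Target: keep injection disabled so the controller attach itself succeeds.
"$SPDK/scripts/rpc.py" -s "$TGT_SOCK" accel_error_inject_error -o crc32c -t disable

# Initiator: attach the TCP controller with data digest (--ddgst) enabled.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Target: corrupt the next 256 crc32c operations, so reads complete on the
# initiator with "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR.
"$SPDK/scripts/rpc.py" -s "$TGT_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 256

# Initiator: run the 2-second randread workload that produces the records below.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests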
00:16:15.117 [2024-11-26 20:39:29.491327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.117 [2024-11-26 20:39:29.491796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.117 [2024-11-26 20:39:29.492020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.117 [2024-11-26 20:39:29.508670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.117 [2024-11-26 20:39:29.508870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.117 [2024-11-26 20:39:29.509094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.117 [2024-11-26 20:39:29.525746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.117 [2024-11-26 20:39:29.525909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.117 [2024-11-26 20:39:29.526093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.117 [2024-11-26 20:39:29.542826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.117 [2024-11-26 20:39:29.542981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.117 [2024-11-26 20:39:29.543083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.117 [2024-11-26 20:39:29.559719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.117 [2024-11-26 20:39:29.559879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.117 [2024-11-26 20:39:29.560299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.117 [2024-11-26 20:39:29.576993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.117 [2024-11-26 20:39:29.577160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.117 [2024-11-26 20:39:29.577212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.117 [2024-11-26 20:39:29.593791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.117 [2024-11-26 20:39:29.593931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.117 [2024-11-26 20:39:29.593998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.118 [2024-11-26 20:39:29.610580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.118 [2024-11-26 20:39:29.610727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.118 [2024-11-26 20:39:29.610789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.118 [2024-11-26 20:39:29.627324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.118 [2024-11-26 20:39:29.627461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.118 [2024-11-26 20:39:29.627514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.118 [2024-11-26 20:39:29.644236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.118 [2024-11-26 20:39:29.644391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.118 [2024-11-26 20:39:29.644503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.118 [2024-11-26 20:39:29.661106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.118 [2024-11-26 20:39:29.661258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.118 [2024-11-26 20:39:29.661757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.430 [2024-11-26 20:39:29.678765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.430 [2024-11-26 20:39:29.678977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.430 [2024-11-26 20:39:29.679494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.430 [2024-11-26 20:39:29.696448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.430 [2024-11-26 20:39:29.696637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.430 [2024-11-26 20:39:29.696701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.430 [2024-11-26 20:39:29.713390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.430 [2024-11-26 20:39:29.713538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.430 [2024-11-26 20:39:29.713612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.430 [2024-11-26 20:39:29.730220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.430 [2024-11-26 20:39:29.730373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.430 [2024-11-26 20:39:29.730481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.430 [2024-11-26 20:39:29.747102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.430 [2024-11-26 20:39:29.747282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.430 [2024-11-26 20:39:29.747761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.430 [2024-11-26 20:39:29.764420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.430 [2024-11-26 20:39:29.764602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.430 [2024-11-26 20:39:29.765356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.430 [2024-11-26 20:39:29.788231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.430 [2024-11-26 20:39:29.788398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.430 [2024-11-26 20:39:29.788456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.430 [2024-11-26 20:39:29.805087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.430 [2024-11-26 20:39:29.805237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.430 [2024-11-26 20:39:29.805678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.430 [2024-11-26 20:39:29.822442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.430 [2024-11-26 20:39:29.822631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.430 [2024-11-26 20:39:29.822759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.430 [2024-11-26 20:39:29.839320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.430 [2024-11-26 20:39:29.839478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.430 [2024-11-26 20:39:29.839577] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.430 [2024-11-26 20:39:29.856148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.431 [2024-11-26 20:39:29.856290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.431 [2024-11-26 20:39:29.856703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.431 [2024-11-26 20:39:29.873386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.431 [2024-11-26 20:39:29.873466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.431 [2024-11-26 20:39:29.873517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.431 [2024-11-26 20:39:29.890117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.431 [2024-11-26 20:39:29.890262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.431 [2024-11-26 20:39:29.890330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.431 [2024-11-26 20:39:29.906866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.431 [2024-11-26 20:39:29.907004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.431 [2024-11-26 20:39:29.907067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.689 [2024-11-26 20:39:29.923615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.689 [2024-11-26 20:39:29.923756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.689 [2024-11-26 20:39:29.923798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.689 [2024-11-26 20:39:29.940342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.689 [2024-11-26 20:39:29.940481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.689 [2024-11-26 20:39:29.940540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.689 [2024-11-26 20:39:29.957123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.689 [2024-11-26 20:39:29.957278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.689 
[2024-11-26 20:39:29.957715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.689 [2024-11-26 20:39:29.974398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.689 [2024-11-26 20:39:29.974574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.689 [2024-11-26 20:39:29.974717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.689 [2024-11-26 20:39:29.991302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.689 [2024-11-26 20:39:29.991464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.689 [2024-11-26 20:39:29.991575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.689 [2024-11-26 20:39:30.008486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.689 [2024-11-26 20:39:30.008700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.689 [2024-11-26 20:39:30.008822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.689 [2024-11-26 20:39:30.025465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.689 [2024-11-26 20:39:30.025646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.689 [2024-11-26 20:39:30.025763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.689 [2024-11-26 20:39:30.042353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.689 [2024-11-26 20:39:30.042518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.689 [2024-11-26 20:39:30.042977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.690 [2024-11-26 20:39:30.059639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.690 [2024-11-26 20:39:30.059809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.690 [2024-11-26 20:39:30.060260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.690 [2024-11-26 20:39:30.076934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.690 [2024-11-26 20:39:30.077106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:246 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.690 [2024-11-26 20:39:30.077229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.690 [2024-11-26 20:39:30.093892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.690 [2024-11-26 20:39:30.094046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.690 [2024-11-26 20:39:30.094174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.690 [2024-11-26 20:39:30.110731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.690 [2024-11-26 20:39:30.110886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.690 [2024-11-26 20:39:30.110992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.690 [2024-11-26 20:39:30.127651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.690 [2024-11-26 20:39:30.127803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.690 [2024-11-26 20:39:30.127980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.690 [2024-11-26 20:39:30.144542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.690 [2024-11-26 20:39:30.144716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.690 [2024-11-26 20:39:30.144825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.690 [2024-11-26 20:39:30.161437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.690 [2024-11-26 20:39:30.161580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.690 [2024-11-26 20:39:30.161611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.690 [2024-11-26 20:39:30.178131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.690 [2024-11-26 20:39:30.178158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.690 [2024-11-26 20:39:30.178165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.690 [2024-11-26 20:39:30.194708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.690 [2024-11-26 20:39:30.194734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:83 nsid:1 lba:18946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.690 [2024-11-26 20:39:30.194742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.690 [2024-11-26 20:39:30.211258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.690 [2024-11-26 20:39:30.211284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.690 [2024-11-26 20:39:30.211291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.690 [2024-11-26 20:39:30.227805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.690 [2024-11-26 20:39:30.227830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.690 [2024-11-26 20:39:30.227837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.948 [2024-11-26 20:39:30.244307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.948 [2024-11-26 20:39:30.244333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.948 [2024-11-26 20:39:30.244340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.948 [2024-11-26 20:39:30.260845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.948 [2024-11-26 20:39:30.260871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.948 [2024-11-26 20:39:30.260879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.948 [2024-11-26 20:39:30.277386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.948 [2024-11-26 20:39:30.277417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.948 [2024-11-26 20:39:30.277424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.948 [2024-11-26 20:39:30.293961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.948 [2024-11-26 20:39:30.294010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.948 [2024-11-26 20:39:30.294021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.948 [2024-11-26 20:39:30.310529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.948 [2024-11-26 20:39:30.310557] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.948 [2024-11-26 20:39:30.310563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.948 [2024-11-26 20:39:30.327029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.948 [2024-11-26 20:39:30.327055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.948 [2024-11-26 20:39:30.327062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.948 [2024-11-26 20:39:30.343555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.948 [2024-11-26 20:39:30.343582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.948 [2024-11-26 20:39:30.343597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.948 [2024-11-26 20:39:30.360131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.948 [2024-11-26 20:39:30.360159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.948 [2024-11-26 20:39:30.360166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.948 [2024-11-26 20:39:30.376692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.948 [2024-11-26 20:39:30.376719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.948 [2024-11-26 20:39:30.376726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.948 [2024-11-26 20:39:30.393265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.948 [2024-11-26 20:39:30.393291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.948 [2024-11-26 20:39:30.393298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.948 [2024-11-26 20:39:30.409766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.948 [2024-11-26 20:39:30.409791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.948 [2024-11-26 20:39:30.409798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.948 [2024-11-26 20:39:30.426317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 
00:16:15.949 [2024-11-26 20:39:30.426342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.949 [2024-11-26 20:39:30.426349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.949 [2024-11-26 20:39:30.442835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.949 [2024-11-26 20:39:30.442861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.949 [2024-11-26 20:39:30.442868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.949 [2024-11-26 20:39:30.459320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.949 [2024-11-26 20:39:30.459346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.949 [2024-11-26 20:39:30.459353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.949 14801.00 IOPS, 57.82 MiB/s [2024-11-26T20:39:30.504Z] [2024-11-26 20:39:30.475870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.949 [2024-11-26 20:39:30.475896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.949 [2024-11-26 20:39:30.475903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.949 [2024-11-26 20:39:30.492357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:15.949 [2024-11-26 20:39:30.492385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.949 [2024-11-26 20:39:30.492392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.207 [2024-11-26 20:39:30.508967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.207 [2024-11-26 20:39:30.508997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.207 [2024-11-26 20:39:30.509006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.207 [2024-11-26 20:39:30.525541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.207 [2024-11-26 20:39:30.525567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.207 [2024-11-26 20:39:30.525574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.207 [2024-11-26 20:39:30.542072] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.207 [2024-11-26 20:39:30.542098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.207 [2024-11-26 20:39:30.542105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.207 [2024-11-26 20:39:30.565746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.207 [2024-11-26 20:39:30.565773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.207 [2024-11-26 20:39:30.565781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.207 [2024-11-26 20:39:30.582253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.207 [2024-11-26 20:39:30.582278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.207 [2024-11-26 20:39:30.582286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.207 [2024-11-26 20:39:30.598802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.207 [2024-11-26 20:39:30.598828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.207 [2024-11-26 20:39:30.598835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.207 [2024-11-26 20:39:30.615305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.207 [2024-11-26 20:39:30.615332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.207 [2024-11-26 20:39:30.615339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.207 [2024-11-26 20:39:30.631860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.207 [2024-11-26 20:39:30.631889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.207 [2024-11-26 20:39:30.631896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.207 [2024-11-26 20:39:30.648419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.207 [2024-11-26 20:39:30.648445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.207 [2024-11-26 20:39:30.648452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:16:16.207 [2024-11-26 20:39:30.664985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.207 [2024-11-26 20:39:30.665012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.207 [2024-11-26 20:39:30.665019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.207 [2024-11-26 20:39:30.681635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.207 [2024-11-26 20:39:30.681663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.207 [2024-11-26 20:39:30.681670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.207 [2024-11-26 20:39:30.698211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.207 [2024-11-26 20:39:30.698239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.207 [2024-11-26 20:39:30.698248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.207 [2024-11-26 20:39:30.714904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.207 [2024-11-26 20:39:30.714929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.207 [2024-11-26 20:39:30.714937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.207 [2024-11-26 20:39:30.731516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.207 [2024-11-26 20:39:30.731659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.207 [2024-11-26 20:39:30.731722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.207 [2024-11-26 20:39:30.748314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.207 [2024-11-26 20:39:30.748421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.207 [2024-11-26 20:39:30.748474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.484 [2024-11-26 20:39:30.765015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.484 [2024-11-26 20:39:30.765117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.484 [2024-11-26 20:39:30.765169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.484 [2024-11-26 20:39:30.781768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.484 [2024-11-26 20:39:30.781869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.484 [2024-11-26 20:39:30.781920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.484 [2024-11-26 20:39:30.798432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.484 [2024-11-26 20:39:30.798533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.484 [2024-11-26 20:39:30.798585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.484 [2024-11-26 20:39:30.815177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.484 [2024-11-26 20:39:30.815280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.484 [2024-11-26 20:39:30.815331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.484 [2024-11-26 20:39:30.831909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.484 [2024-11-26 20:39:30.832014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.484 [2024-11-26 20:39:30.832490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.484 [2024-11-26 20:39:30.849110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.484 [2024-11-26 20:39:30.849226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.484 [2024-11-26 20:39:30.849282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.484 [2024-11-26 20:39:30.865820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.484 [2024-11-26 20:39:30.865924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.484 [2024-11-26 20:39:30.865976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.484 [2024-11-26 20:39:30.882502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.484 [2024-11-26 20:39:30.882629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.484 [2024-11-26 20:39:30.882684] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.484 [2024-11-26 20:39:30.899201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.484 [2024-11-26 20:39:30.899310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.484 [2024-11-26 20:39:30.899362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.484 [2024-11-26 20:39:30.915943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.484 [2024-11-26 20:39:30.916051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.484 [2024-11-26 20:39:30.916102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.484 [2024-11-26 20:39:30.932628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.484 [2024-11-26 20:39:30.932729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.484 [2024-11-26 20:39:30.932783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.484 [2024-11-26 20:39:30.949291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.484 [2024-11-26 20:39:30.949392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.485 [2024-11-26 20:39:30.949443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.485 [2024-11-26 20:39:30.966006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.485 [2024-11-26 20:39:30.966104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.485 [2024-11-26 20:39:30.966155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.485 [2024-11-26 20:39:30.982688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.485 [2024-11-26 20:39:30.982787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.485 [2024-11-26 20:39:30.982841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.485 [2024-11-26 20:39:30.999326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.485 [2024-11-26 20:39:30.999428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.485 
[2024-11-26 20:39:30.999480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.485 [2024-11-26 20:39:31.016042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.485 [2024-11-26 20:39:31.016147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.485 [2024-11-26 20:39:31.016199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.485 [2024-11-26 20:39:31.032736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.485 [2024-11-26 20:39:31.032836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.485 [2024-11-26 20:39:31.032887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.743 [2024-11-26 20:39:31.049397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.743 [2024-11-26 20:39:31.049498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.743 [2024-11-26 20:39:31.049550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.743 [2024-11-26 20:39:31.066099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.743 [2024-11-26 20:39:31.066234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.743 [2024-11-26 20:39:31.066288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.743 [2024-11-26 20:39:31.082860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.743 [2024-11-26 20:39:31.082972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.743 [2024-11-26 20:39:31.083024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.743 [2024-11-26 20:39:31.099537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.743 [2024-11-26 20:39:31.099647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.743 [2024-11-26 20:39:31.099657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.743 [2024-11-26 20:39:31.116169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.743 [2024-11-26 20:39:31.116196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17118 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.743 [2024-11-26 20:39:31.116203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.743 [2024-11-26 20:39:31.132787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.743 [2024-11-26 20:39:31.132888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.743 [2024-11-26 20:39:31.132930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.743 [2024-11-26 20:39:31.149476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.743 [2024-11-26 20:39:31.149506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.743 [2024-11-26 20:39:31.149515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.743 [2024-11-26 20:39:31.166034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.743 [2024-11-26 20:39:31.166061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.743 [2024-11-26 20:39:31.166068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.743 [2024-11-26 20:39:31.182607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.743 [2024-11-26 20:39:31.182638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.743 [2024-11-26 20:39:31.182646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.743 [2024-11-26 20:39:31.199118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.743 [2024-11-26 20:39:31.199144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.743 [2024-11-26 20:39:31.199151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.743 [2024-11-26 20:39:31.215623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.743 [2024-11-26 20:39:31.215650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.743 [2024-11-26 20:39:31.215657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.743 [2024-11-26 20:39:31.232209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.743 [2024-11-26 20:39:31.232237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:46 nsid:1 lba:4011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.744 [2024-11-26 20:39:31.232243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.744 [2024-11-26 20:39:31.248741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.744 [2024-11-26 20:39:31.248767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.744 [2024-11-26 20:39:31.248773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.744 [2024-11-26 20:39:31.265260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.744 [2024-11-26 20:39:31.265285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.744 [2024-11-26 20:39:31.265292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.744 [2024-11-26 20:39:31.281797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:16.744 [2024-11-26 20:39:31.281822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.744 [2024-11-26 20:39:31.281829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.002 [2024-11-26 20:39:31.298334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:17.002 [2024-11-26 20:39:31.298368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.002 [2024-11-26 20:39:31.298376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.002 [2024-11-26 20:39:31.314995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:17.002 [2024-11-26 20:39:31.315025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.002 [2024-11-26 20:39:31.315032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.002 [2024-11-26 20:39:31.331537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:17.002 [2024-11-26 20:39:31.331653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.002 [2024-11-26 20:39:31.331662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.002 [2024-11-26 20:39:31.348157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:17.002 [2024-11-26 20:39:31.348184] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.002 [2024-11-26 20:39:31.348191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.002 [2024-11-26 20:39:31.364701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:17.002 [2024-11-26 20:39:31.364727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.002 [2024-11-26 20:39:31.364733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.002 [2024-11-26 20:39:31.381248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:17.002 [2024-11-26 20:39:31.381276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.002 [2024-11-26 20:39:31.381285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.002 [2024-11-26 20:39:31.397781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:17.002 [2024-11-26 20:39:31.397807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.002 [2024-11-26 20:39:31.397814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.002 [2024-11-26 20:39:31.414293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:17.002 [2024-11-26 20:39:31.414319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.002 [2024-11-26 20:39:31.414326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.002 [2024-11-26 20:39:31.430833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:17.002 [2024-11-26 20:39:31.430858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.002 [2024-11-26 20:39:31.430865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.002 [2024-11-26 20:39:31.447348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:17.002 [2024-11-26 20:39:31.447373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.002 [2024-11-26 20:39:31.447380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.002 [2024-11-26 20:39:31.465429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc98fb0) 00:16:17.002 
[2024-11-26 20:39:31.465455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.002 [2024-11-26 20:39:31.465462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.002 14991.00 IOPS, 58.56 MiB/s 00:16:17.002 Latency(us) 00:16:17.002 [2024-11-26T20:39:31.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:17.002 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:17.003 nvme0n1 : 2.01 15037.41 58.74 0.00 0.00 8507.97 7713.08 32062.23 00:16:17.003 [2024-11-26T20:39:31.558Z] =================================================================================================================== 00:16:17.003 [2024-11-26T20:39:31.558Z] Total : 15037.41 58.74 0.00 0.00 8507.97 7713.08 32062.23 00:16:17.003 { 00:16:17.003 "results": [ 00:16:17.003 { 00:16:17.003 "job": "nvme0n1", 00:16:17.003 "core_mask": "0x2", 00:16:17.003 "workload": "randread", 00:16:17.003 "status": "finished", 00:16:17.003 "queue_depth": 128, 00:16:17.003 "io_size": 4096, 00:16:17.003 "runtime": 2.010718, 00:16:17.003 "iops": 15037.41449571745, 00:16:17.003 "mibps": 58.73990037389629, 00:16:17.003 "io_failed": 0, 00:16:17.003 "io_timeout": 0, 00:16:17.003 "avg_latency_us": 8507.971426419856, 00:16:17.003 "min_latency_us": 7713.083076923077, 00:16:17.003 "max_latency_us": 32062.227692307693 00:16:17.003 } 00:16:17.003 ], 00:16:17.003 "core_count": 1 00:16:17.003 } 00:16:17.003 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:17.003 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:17.003 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:17.003 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:17.003 | .driver_specific 00:16:17.003 | .nvme_error 00:16:17.003 | .status_code 00:16:17.003 | .command_transient_transport_error' 00:16:17.260 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 118 > 0 )) 00:16:17.260 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79312 00:16:17.260 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79312 ']' 00:16:17.260 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79312 00:16:17.260 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:16:17.260 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:17.260 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79312 00:16:17.261 killing process with pid 79312 00:16:17.261 Received shutdown signal, test time was about 2.000000 seconds 00:16:17.261 00:16:17.261 Latency(us) 00:16:17.261 [2024-11-26T20:39:31.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:17.261 [2024-11-26T20:39:31.816Z] 
=================================================================================================================== 00:16:17.261 [2024-11-26T20:39:31.816Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:17.261 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:17.261 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:17.261 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79312' 00:16:17.261 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79312 00:16:17.261 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79312 00:16:17.518 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:16:17.518 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:17.518 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:16:17.518 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:16:17.518 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:16:17.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:17.518 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79371 00:16:17.518 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79371 /var/tmp/bperf.sock 00:16:17.518 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:16:17.518 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79371 ']' 00:16:17.518 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:17.518 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:17.518 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:17.518 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:17.518 20:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:17.518 [2024-11-26 20:39:31.872804] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:16:17.519 [2024-11-26 20:39:31.873367] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79371 ] 00:16:17.519 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:17.519 Zero copy mechanism will not be used. 
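Before the second bdevperf instance comes up below, the first run's summary table is worth a quick consistency check: at the reported IO size of 4096 bytes, 15037.41 IOPS works out to 15037.41 * 4096 / 1048576 ~ 58.74 MiB/s, matching the reported throughput, and a rough Little's-law estimate of queue depth over average latency, 128 / 8507.97 us ~ 15045 IOPS, is consistent with the measured rate. The one-liner below only reproduces that arithmetic; the numbers are copied from the table above and nothing here queries the running processes.

# Consistency check on the first run's reported figures (values taken from the summary table above)
awk 'BEGIN { printf "throughput: %.2f MiB/s  littles-law IOPS estimate: %.0f\n", 15037.41 * 4096 / 1048576, 128 / (8507.97 / 1e6) }'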
00:16:17.519 [2024-11-26 20:39:32.007761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.519 [2024-11-26 20:39:32.044539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.777 [2024-11-26 20:39:32.075525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:18.343 20:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:18.343 20:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:16:18.343 20:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:18.343 20:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:18.601 20:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:18.601 20:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.601 20:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:18.601 20:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.601 20:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:18.601 20:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:18.859 nvme0n1 00:16:18.859 20:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:18.859 20:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.859 20:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:18.859 20:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.859 20:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:18.859 20:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:18.859 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:18.859 Zero copy mechanism will not be used. 00:16:18.859 Running I/O for 2 seconds... 
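The xtrace output above shows the full setup for this second error-injection pass. Condensed into plain commands, the sequence is roughly the following; the socket path, the 10.0.0.3:4420 target, the NQN, the injection parameters, and the jq filter are all copied from the trace (the jq expression matches the get_transient_errcount call traced after the first pass). This is a minimal sketch for readability, not the digest.sh script itself, and the socket used by rpc_cmd for the accel calls is an assumption.

# Host-side calls go to the bdevperf instance listening on /var/tmp/bperf.sock.
RPC_BPERF="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# Count NVMe error completions and retry indefinitely, so injected digest failures
# surface as COMMAND TRANSIENT TRANSPORT ERROR completions instead of failing the bdev I/O.
$RPC_BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the target with data digest enabled (--ddgst); this creates bdev nvme0n1.
$RPC_BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 32nd crc32c operation so received data digests stop matching.
# In the trace this goes through rpc_cmd; the application's default RPC socket is assumed here.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

# Drive the configured workload (randread, 128 KiB IOs, queue depth 16, 2 seconds in this pass).
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

# Afterwards, read back how many completions ended as transient transport errors.
$RPC_BPERF bdev_get_iostat -b nvme0n1 | jq -r \
    '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'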
00:16:18.859 [2024-11-26 20:39:33.342854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:18.859 [2024-11-26 20:39:33.342897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.859 [2024-11-26 20:39:33.342908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.859 [2024-11-26 20:39:33.346429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:18.859 [2024-11-26 20:39:33.346462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.859 [2024-11-26 20:39:33.346470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.859 [2024-11-26 20:39:33.349875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:18.859 [2024-11-26 20:39:33.349902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.859 [2024-11-26 20:39:33.349910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.859 [2024-11-26 20:39:33.353354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:18.860 [2024-11-26 20:39:33.353471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.860 [2024-11-26 20:39:33.353481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.860 [2024-11-26 20:39:33.356902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:18.860 [2024-11-26 20:39:33.357004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.860 [2024-11-26 20:39:33.357013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.860 [2024-11-26 20:39:33.360447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:18.860 [2024-11-26 20:39:33.360548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.860 [2024-11-26 20:39:33.360558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.860 [2024-11-26 20:39:33.363982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:18.860 [2024-11-26 20:39:33.364082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.860 [2024-11-26 20:39:33.364091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.860 [2024-11-26 20:39:33.367501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:18.860 [2024-11-26 20:39:33.367607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.860 [2024-11-26 20:39:33.367617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.860 [2024-11-26 20:39:33.371007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:18.860 [2024-11-26 20:39:33.371092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.860 [2024-11-26 20:39:33.371101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.860 [2024-11-26 20:39:33.374811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:18.860 [2024-11-26 20:39:33.374962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.860 [2024-11-26 20:39:33.375022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.860 [2024-11-26 20:39:33.378649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:18.860 [2024-11-26 20:39:33.378757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.860 [2024-11-26 20:39:33.378813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.860 [2024-11-26 20:39:33.382324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:18.860 [2024-11-26 20:39:33.382427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.860 [2024-11-26 20:39:33.382477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.860 [2024-11-26 20:39:33.386054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:18.860 [2024-11-26 20:39:33.386159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.860 [2024-11-26 20:39:33.386210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.860 [2024-11-26 20:39:33.389663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:18.860 [2024-11-26 20:39:33.389762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.860 [2024-11-26 20:39:33.389811] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.860 [2024-11-26 20:39:33.393366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:18.860 [2024-11-26 20:39:33.393464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.860 [2024-11-26 20:39:33.393514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.860 [2024-11-26 20:39:33.397187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:18.860 [2024-11-26 20:39:33.397290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.860 [2024-11-26 20:39:33.397356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.860 [2024-11-26 20:39:33.400880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:18.860 [2024-11-26 20:39:33.400982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.860 [2024-11-26 20:39:33.401031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.860 [2024-11-26 20:39:33.404460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:18.860 [2024-11-26 20:39:33.404564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.860 [2024-11-26 20:39:33.404630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.860 [2024-11-26 20:39:33.408058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:18.860 [2024-11-26 20:39:33.408161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.860 [2024-11-26 20:39:33.408209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.860 [2024-11-26 20:39:33.411596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:18.860 [2024-11-26 20:39:33.411697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.860 [2024-11-26 20:39:33.411746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.121 [2024-11-26 20:39:33.415222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.121 [2024-11-26 20:39:33.415324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.121 [2024-11-26 20:39:33.415373] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.121 [2024-11-26 20:39:33.418825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.121 [2024-11-26 20:39:33.418928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.121 [2024-11-26 20:39:33.418977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.121 [2024-11-26 20:39:33.422440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.121 [2024-11-26 20:39:33.422543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.121 [2024-11-26 20:39:33.422602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.121 [2024-11-26 20:39:33.426055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.121 [2024-11-26 20:39:33.426158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.121 [2024-11-26 20:39:33.426210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.121 [2024-11-26 20:39:33.429697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.121 [2024-11-26 20:39:33.429794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.121 [2024-11-26 20:39:33.429859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.121 [2024-11-26 20:39:33.433256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.121 [2024-11-26 20:39:33.433354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.121 [2024-11-26 20:39:33.433402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.121 [2024-11-26 20:39:33.436808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.121 [2024-11-26 20:39:33.436909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.121 [2024-11-26 20:39:33.436958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.121 [2024-11-26 20:39:33.440341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.121 [2024-11-26 20:39:33.440444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:19.121 [2024-11-26 20:39:33.440493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.121 [2024-11-26 20:39:33.443976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.121 [2024-11-26 20:39:33.444082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.121 [2024-11-26 20:39:33.444131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.121 [2024-11-26 20:39:33.447603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.121 [2024-11-26 20:39:33.447703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.121 [2024-11-26 20:39:33.447752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.121 [2024-11-26 20:39:33.451193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.121 [2024-11-26 20:39:33.451294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.121 [2024-11-26 20:39:33.451344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.121 [2024-11-26 20:39:33.454775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.121 [2024-11-26 20:39:33.454874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.121 [2024-11-26 20:39:33.454924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.121 [2024-11-26 20:39:33.458403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.121 [2024-11-26 20:39:33.458506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.121 [2024-11-26 20:39:33.458555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.121 [2024-11-26 20:39:33.462007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.121 [2024-11-26 20:39:33.462099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.121 [2024-11-26 20:39:33.462108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.121 [2024-11-26 20:39:33.465529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.121 [2024-11-26 20:39:33.465628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.121 [2024-11-26 20:39:33.465637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.121 [2024-11-26 20:39:33.469045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.121 [2024-11-26 20:39:33.469134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.121 [2024-11-26 20:39:33.469143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.121 [2024-11-26 20:39:33.472643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.121 [2024-11-26 20:39:33.472669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.121 [2024-11-26 20:39:33.472676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.121 [2024-11-26 20:39:33.476060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.121 [2024-11-26 20:39:33.476157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.121 [2024-11-26 20:39:33.476166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.121 [2024-11-26 20:39:33.479599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.121 [2024-11-26 20:39:33.479625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.121 [2024-11-26 20:39:33.479632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.121 [2024-11-26 20:39:33.483369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.121 [2024-11-26 20:39:33.483465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.121 [2024-11-26 20:39:33.483475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.121 [2024-11-26 20:39:33.486963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.121 [2024-11-26 20:39:33.486987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.121 [2024-11-26 20:39:33.486994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.121 [2024-11-26 20:39:33.490429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.121 [2024-11-26 20:39:33.490457] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.121 [2024-11-26 20:39:33.490464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.121 [2024-11-26 20:39:33.493893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.121 [2024-11-26 20:39:33.493920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.121 [2024-11-26 20:39:33.493927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.121 [2024-11-26 20:39:33.497296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.121 [2024-11-26 20:39:33.497322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.121 [2024-11-26 20:39:33.497329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.121 [2024-11-26 20:39:33.500744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.500770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.500777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.122 [2024-11-26 20:39:33.504228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.504256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.504264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.122 [2024-11-26 20:39:33.507685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.507714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.507722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.122 [2024-11-26 20:39:33.511132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.511236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.511245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.122 [2024-11-26 20:39:33.514723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 
00:16:19.122 [2024-11-26 20:39:33.514750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.514757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.122 [2024-11-26 20:39:33.518125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.518220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.518229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.122 [2024-11-26 20:39:33.521964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.521999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.522007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.122 [2024-11-26 20:39:33.525433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.525525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.525533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.122 [2024-11-26 20:39:33.528939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.529033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.529043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.122 [2024-11-26 20:39:33.532480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.532572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.532581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.122 [2024-11-26 20:39:33.536217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.536313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.536324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.122 [2024-11-26 20:39:33.539818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.539844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.539852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.122 [2024-11-26 20:39:33.543246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.543341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.543350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.122 [2024-11-26 20:39:33.546815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.546841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.546849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.122 [2024-11-26 20:39:33.550247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.550340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.550349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.122 [2024-11-26 20:39:33.553744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.553770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.553777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.122 [2024-11-26 20:39:33.557215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.557306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.557315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.122 [2024-11-26 20:39:33.560802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.560831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.560838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.122 [2024-11-26 20:39:33.564206] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.564299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.564307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.122 [2024-11-26 20:39:33.567706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.567734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.567740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.122 [2024-11-26 20:39:33.571144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.571237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.571246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.122 [2024-11-26 20:39:33.574652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.574678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.574685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.122 [2024-11-26 20:39:33.578080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.578171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.578180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.122 [2024-11-26 20:39:33.581569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.581676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.581685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.122 [2024-11-26 20:39:33.585124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.585215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.585224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:16:19.122 [2024-11-26 20:39:33.588565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.588667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.588677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.122 [2024-11-26 20:39:33.592106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.592196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.122 [2024-11-26 20:39:33.592205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.122 [2024-11-26 20:39:33.595654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.122 [2024-11-26 20:39:33.595681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.123 [2024-11-26 20:39:33.595687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.123 [2024-11-26 20:39:33.599058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.123 [2024-11-26 20:39:33.599151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.123 [2024-11-26 20:39:33.599159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.123 [2024-11-26 20:39:33.602905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.123 [2024-11-26 20:39:33.602932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.123 [2024-11-26 20:39:33.602939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.123 [2024-11-26 20:39:33.606316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.123 [2024-11-26 20:39:33.606413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.123 [2024-11-26 20:39:33.606422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.123 [2024-11-26 20:39:33.609873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.123 [2024-11-26 20:39:33.609962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.123 [2024-11-26 20:39:33.609971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.123 [2024-11-26 20:39:33.613388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.123 [2024-11-26 20:39:33.613480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.123 [2024-11-26 20:39:33.613489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.123 [2024-11-26 20:39:33.616853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.123 [2024-11-26 20:39:33.616947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.123 [2024-11-26 20:39:33.616956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.123 [2024-11-26 20:39:33.620350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.123 [2024-11-26 20:39:33.620439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.123 [2024-11-26 20:39:33.620448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.123 [2024-11-26 20:39:33.624145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.123 [2024-11-26 20:39:33.624174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.123 [2024-11-26 20:39:33.624181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.123 [2024-11-26 20:39:33.627885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.123 [2024-11-26 20:39:33.627912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.123 [2024-11-26 20:39:33.627921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.123 [2024-11-26 20:39:33.631471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.123 [2024-11-26 20:39:33.631567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.123 [2024-11-26 20:39:33.631575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.123 [2024-11-26 20:39:33.634949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.123 [2024-11-26 20:39:33.635042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.123 [2024-11-26 20:39:33.635052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.123 [2024-11-26 20:39:33.638550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.123 [2024-11-26 20:39:33.638657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.123 [2024-11-26 20:39:33.638716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.123 [2024-11-26 20:39:33.642155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.123 [2024-11-26 20:39:33.642258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.123 [2024-11-26 20:39:33.642307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.123 [2024-11-26 20:39:33.645748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.123 [2024-11-26 20:39:33.645847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.123 [2024-11-26 20:39:33.645896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.123 [2024-11-26 20:39:33.649326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.123 [2024-11-26 20:39:33.649424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.123 [2024-11-26 20:39:33.649474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.123 [2024-11-26 20:39:33.653401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.123 [2024-11-26 20:39:33.653500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.123 [2024-11-26 20:39:33.653550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.123 [2024-11-26 20:39:33.656995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.123 [2024-11-26 20:39:33.657095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.123 [2024-11-26 20:39:33.657144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.123 [2024-11-26 20:39:33.660679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.123 [2024-11-26 20:39:33.660777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.123 
[2024-11-26 20:39:33.660827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.123 [2024-11-26 20:39:33.664281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.123 [2024-11-26 20:39:33.664384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.123 [2024-11-26 20:39:33.664434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.123 [2024-11-26 20:39:33.667873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.123 [2024-11-26 20:39:33.667975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.123 [2024-11-26 20:39:33.668025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.123 [2024-11-26 20:39:33.671460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.123 [2024-11-26 20:39:33.671564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.123 [2024-11-26 20:39:33.671629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.441 [2024-11-26 20:39:33.675358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.441 [2024-11-26 20:39:33.675461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.441 [2024-11-26 20:39:33.675511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.441 [2024-11-26 20:39:33.679052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.441 [2024-11-26 20:39:33.679152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.441 [2024-11-26 20:39:33.679202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.441 [2024-11-26 20:39:33.682674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.441 [2024-11-26 20:39:33.682774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.682823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.686233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.686334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9728 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.686384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.689949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.690059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.690109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.693557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.693671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.693721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.697145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.697245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.697309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.701233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.701345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.701357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.705328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.705358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.705366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.708790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.708819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.708826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.712414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.712517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.712528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.715965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.716059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.716068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.719484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.719579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.719599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.723144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.723227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.723237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.726630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.726656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.726663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.730076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.730173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.730182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.733731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.733758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.733767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.737351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.737442] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.737451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.740901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.740996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.741006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.744380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.744472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.744481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.748074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.748172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.748183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.751663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.751690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.751698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.755148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.755243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.755252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.758667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.758694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.758700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.762133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.762226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.762235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.765694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.765720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.765727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.769145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.769235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.769244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.772697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.772724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.772732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.776150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.776240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.776249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.779712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.779738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.779745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.783121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.783214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.783223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.786926] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.786955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.786962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.790512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.790622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.790631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.794039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.794134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.794142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.797755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.797782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.797791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.801227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.801254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.801261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.804953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.804980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.804987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.808423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.808518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.808527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:16:19.442 [2024-11-26 20:39:33.811924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.812015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.812024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.815416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.815510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.815519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.818967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.818991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.818999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.442 [2024-11-26 20:39:33.822556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.442 [2024-11-26 20:39:33.822584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.442 [2024-11-26 20:39:33.822605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.826047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.826072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.826079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.829618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.829641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.829648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.833070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.833097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.833103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.836567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.836603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.836610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.840094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.840120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.840127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.843853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.843881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.843888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.847347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.847444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.847453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.850928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.851023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.851032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.854465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.854558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.854567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.857966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.858076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.858085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.861478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.861573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.861582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.865024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.865118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.865128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.868462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.868554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.868563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.871966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.872060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.872069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.875503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.875612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.875621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.879044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.879138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.879147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.882569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.882670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 
[2024-11-26 20:39:33.882710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.886119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.886222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.886275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.889693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.889795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.889850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.893293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.893394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.893509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.897048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.897153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.897206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.900636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.900735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.900812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.904276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.904379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.904438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.907855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.907958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17600 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.908010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.911453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.911558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.911630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.915101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.915202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.915256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.918788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.918891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.919313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.926833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.926964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.927098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.930853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.930974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.931074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.934781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.934899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.934996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.938714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.938821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.938876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.942353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.942459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.942514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.946021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.946115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.946125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.949530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.949635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.949645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.953050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.953147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.953156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.956560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.956673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.956682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.960108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.960204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.960213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.963605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.963632] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.963639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.443 [2024-11-26 20:39:33.967085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.443 [2024-11-26 20:39:33.967182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.443 [2024-11-26 20:39:33.967191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.444 [2024-11-26 20:39:33.970685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.444 [2024-11-26 20:39:33.970731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.444 [2024-11-26 20:39:33.970806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.444 [2024-11-26 20:39:33.974246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.444 [2024-11-26 20:39:33.974354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.444 [2024-11-26 20:39:33.974411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.444 [2024-11-26 20:39:33.977962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.444 [2024-11-26 20:39:33.978080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.444 [2024-11-26 20:39:33.978151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.444 [2024-11-26 20:39:33.981527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.444 [2024-11-26 20:39:33.981645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.444 [2024-11-26 20:39:33.981706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.444 [2024-11-26 20:39:33.985130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.444 [2024-11-26 20:39:33.985235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.444 [2024-11-26 20:39:33.985310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.444 [2024-11-26 20:39:33.988792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18a69b0) 00:16:19.444 [2024-11-26 20:39:33.988898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.444 [2024-11-26 20:39:33.988958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.444 [2024-11-26 20:39:33.992398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.444 [2024-11-26 20:39:33.992502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.444 [2024-11-26 20:39:33.992556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.704 [2024-11-26 20:39:33.996059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.704 [2024-11-26 20:39:33.996164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.704 [2024-11-26 20:39:33.996219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.704 [2024-11-26 20:39:33.999694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.704 [2024-11-26 20:39:33.999798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.704 [2024-11-26 20:39:33.999850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.704 [2024-11-26 20:39:34.003299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.704 [2024-11-26 20:39:34.003405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.704 [2024-11-26 20:39:34.003457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.704 [2024-11-26 20:39:34.006942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.704 [2024-11-26 20:39:34.007047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.007103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.010614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.705 [2024-11-26 20:39:34.010717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.010810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.014276] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.705 [2024-11-26 20:39:34.014383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.014440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.017928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.705 [2024-11-26 20:39:34.018040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.018160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.021609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.705 [2024-11-26 20:39:34.021712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.021768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.025257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.705 [2024-11-26 20:39:34.025360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.025415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.028833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.705 [2024-11-26 20:39:34.028937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.029012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.032499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.705 [2024-11-26 20:39:34.032616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.032670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.036166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.705 [2024-11-26 20:39:34.036271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.036325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.039785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.705 [2024-11-26 20:39:34.039878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.039887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.043327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.705 [2024-11-26 20:39:34.043357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.043364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.046876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.705 [2024-11-26 20:39:34.046905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.046912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.050318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.705 [2024-11-26 20:39:34.050450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.050459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.053854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.705 [2024-11-26 20:39:34.053950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.053960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.057411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.705 [2024-11-26 20:39:34.057437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.057444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.060966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.705 [2024-11-26 20:39:34.061073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.061173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.064646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.705 [2024-11-26 20:39:34.064751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.064853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.068323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.705 [2024-11-26 20:39:34.068434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.068530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.072017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.705 [2024-11-26 20:39:34.072122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.072177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.075598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.705 [2024-11-26 20:39:34.075698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.075792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.079319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.705 [2024-11-26 20:39:34.079423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.079477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.083008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.705 [2024-11-26 20:39:34.083115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.083176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.086634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.705 [2024-11-26 20:39:34.086744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.086802] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.090226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.705 [2024-11-26 20:39:34.090329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.090430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.093904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.705 [2024-11-26 20:39:34.094027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.094079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.097519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.705 [2024-11-26 20:39:34.097638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.097707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.101219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.705 [2024-11-26 20:39:34.101323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.705 [2024-11-26 20:39:34.101378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.705 [2024-11-26 20:39:34.104807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.104907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 [2024-11-26 20:39:34.104988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.706 [2024-11-26 20:39:34.108427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.108530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 [2024-11-26 20:39:34.108597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.706 [2024-11-26 20:39:34.112122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.112226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 
[2024-11-26 20:39:34.112280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.706 [2024-11-26 20:39:34.115778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.115880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 [2024-11-26 20:39:34.115976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.706 [2024-11-26 20:39:34.119483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.119595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 [2024-11-26 20:39:34.119652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.706 [2024-11-26 20:39:34.123077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.123179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 [2024-11-26 20:39:34.123240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.706 [2024-11-26 20:39:34.126721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.126826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 [2024-11-26 20:39:34.126899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.706 [2024-11-26 20:39:34.130356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.130449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 [2024-11-26 20:39:34.130458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.706 [2024-11-26 20:39:34.133907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.134008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 [2024-11-26 20:39:34.134017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.706 [2024-11-26 20:39:34.137465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.137561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22144 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 [2024-11-26 20:39:34.137570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.706 [2024-11-26 20:39:34.141004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.141101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 [2024-11-26 20:39:34.141110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.706 [2024-11-26 20:39:34.144504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.144615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 [2024-11-26 20:39:34.144624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.706 [2024-11-26 20:39:34.148030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.148123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 [2024-11-26 20:39:34.148132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.706 [2024-11-26 20:39:34.151552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.151661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 [2024-11-26 20:39:34.151671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.706 [2024-11-26 20:39:34.155109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.155207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 [2024-11-26 20:39:34.155216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.706 [2024-11-26 20:39:34.158663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.158691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 [2024-11-26 20:39:34.158698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.706 [2024-11-26 20:39:34.162146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.162246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 [2024-11-26 20:39:34.162255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.706 [2024-11-26 20:39:34.165689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.165716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 [2024-11-26 20:39:34.165724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.706 [2024-11-26 20:39:34.169132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.169230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 [2024-11-26 20:39:34.169239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.706 [2024-11-26 20:39:34.172665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.172693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 [2024-11-26 20:39:34.172700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.706 [2024-11-26 20:39:34.176100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.176199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 [2024-11-26 20:39:34.176208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.706 [2024-11-26 20:39:34.179663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.179690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 [2024-11-26 20:39:34.179697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.706 [2024-11-26 20:39:34.183074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.183172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 [2024-11-26 20:39:34.183181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.706 [2024-11-26 20:39:34.186716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.186765] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 [2024-11-26 20:39:34.186902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.706 [2024-11-26 20:39:34.190346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.190454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 [2024-11-26 20:39:34.190511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.706 [2024-11-26 20:39:34.194019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.194126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 [2024-11-26 20:39:34.194200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.706 [2024-11-26 20:39:34.197667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.706 [2024-11-26 20:39:34.197769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.706 [2024-11-26 20:39:34.197824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.707 [2024-11-26 20:39:34.201308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.707 [2024-11-26 20:39:34.201411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.707 [2024-11-26 20:39:34.201465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.707 [2024-11-26 20:39:34.204909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.707 [2024-11-26 20:39:34.205015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.707 [2024-11-26 20:39:34.205108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.707 [2024-11-26 20:39:34.208644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.707 [2024-11-26 20:39:34.208749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.707 [2024-11-26 20:39:34.208803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.707 [2024-11-26 20:39:34.212290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 
00:16:19.707 [2024-11-26 20:39:34.212394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.707 [2024-11-26 20:39:34.212447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.707 [2024-11-26 20:39:34.215941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.707 [2024-11-26 20:39:34.216046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.707 [2024-11-26 20:39:34.216144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.707 [2024-11-26 20:39:34.219692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.707 [2024-11-26 20:39:34.219796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.707 [2024-11-26 20:39:34.219884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.707 [2024-11-26 20:39:34.223415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.707 [2024-11-26 20:39:34.223523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.707 [2024-11-26 20:39:34.223575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.707 [2024-11-26 20:39:34.227078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.707 [2024-11-26 20:39:34.227183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.707 [2024-11-26 20:39:34.227277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.707 [2024-11-26 20:39:34.230773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.707 [2024-11-26 20:39:34.230878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.707 [2024-11-26 20:39:34.230933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.707 [2024-11-26 20:39:34.234461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.707 [2024-11-26 20:39:34.234569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.707 [2024-11-26 20:39:34.234663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.707 [2024-11-26 20:39:34.238181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x18a69b0) 00:16:19.707 [2024-11-26 20:39:34.238287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.707 [2024-11-26 20:39:34.238341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.707 [2024-11-26 20:39:34.241842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.707 [2024-11-26 20:39:34.241945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.707 [2024-11-26 20:39:34.242006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.707 [2024-11-26 20:39:34.245484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.707 [2024-11-26 20:39:34.245598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.707 [2024-11-26 20:39:34.245654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.707 [2024-11-26 20:39:34.249108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.707 [2024-11-26 20:39:34.249204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.707 [2024-11-26 20:39:34.249213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.707 [2024-11-26 20:39:34.252660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.707 [2024-11-26 20:39:34.252688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.707 [2024-11-26 20:39:34.252695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.967 [2024-11-26 20:39:34.256153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.967 [2024-11-26 20:39:34.256252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.967 [2024-11-26 20:39:34.256261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.967 [2024-11-26 20:39:34.259679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.967 [2024-11-26 20:39:34.259708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.967 [2024-11-26 20:39:34.259715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.967 [2024-11-26 20:39:34.263119] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.967 [2024-11-26 20:39:34.263216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.967 [2024-11-26 20:39:34.263225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.967 [2024-11-26 20:39:34.266739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.967 [2024-11-26 20:39:34.266821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.967 [2024-11-26 20:39:34.266876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.967 [2024-11-26 20:39:34.270336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.967 [2024-11-26 20:39:34.270440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.967 [2024-11-26 20:39:34.270495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.967 [2024-11-26 20:39:34.273976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.967 [2024-11-26 20:39:34.274090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.967 [2024-11-26 20:39:34.274151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.967 [2024-11-26 20:39:34.277583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.967 [2024-11-26 20:39:34.277695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.967 [2024-11-26 20:39:34.277746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.967 [2024-11-26 20:39:34.281166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.967 [2024-11-26 20:39:34.281271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.968 [2024-11-26 20:39:34.281324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.968 [2024-11-26 20:39:34.284769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.284874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.968 [2024-11-26 20:39:34.284925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:16:19.968 [2024-11-26 20:39:34.288423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.288528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.968 [2024-11-26 20:39:34.288581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.968 [2024-11-26 20:39:34.292100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.292203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.968 [2024-11-26 20:39:34.292257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.968 [2024-11-26 20:39:34.299024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.299292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.968 [2024-11-26 20:39:34.299438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.968 [2024-11-26 20:39:34.305703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.305927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.968 [2024-11-26 20:39:34.306229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.968 [2024-11-26 20:39:34.309868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.309972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.968 [2024-11-26 20:39:34.310033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.968 [2024-11-26 20:39:34.313455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.313557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.968 [2024-11-26 20:39:34.313654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.968 [2024-11-26 20:39:34.317115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.317220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.968 [2024-11-26 20:39:34.317274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.968 [2024-11-26 20:39:34.320838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.320944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.968 [2024-11-26 20:39:34.320987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.968 [2024-11-26 20:39:34.324336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.324433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.968 [2024-11-26 20:39:34.324442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.968 [2024-11-26 20:39:34.327962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.327992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.968 [2024-11-26 20:39:34.328000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.968 [2024-11-26 20:39:34.331415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.331514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.968 [2024-11-26 20:39:34.331523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.968 [2024-11-26 20:39:34.334950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.335046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.968 [2024-11-26 20:39:34.335055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.968 [2024-11-26 20:39:34.338477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.338570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.968 [2024-11-26 20:39:34.338579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.968 8509.00 IOPS, 1063.62 MiB/s [2024-11-26T20:39:34.523Z] [2024-11-26 20:39:34.342201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.342235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.968 [2024-11-26 
20:39:34.342247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.968 [2024-11-26 20:39:34.344713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.344745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.968 [2024-11-26 20:39:34.344753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.968 [2024-11-26 20:39:34.347236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.347264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.968 [2024-11-26 20:39:34.347272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.968 [2024-11-26 20:39:34.349623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.349650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.968 [2024-11-26 20:39:34.349658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.968 [2024-11-26 20:39:34.351985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.352015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.968 [2024-11-26 20:39:34.352022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.968 [2024-11-26 20:39:34.354479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.354607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.968 [2024-11-26 20:39:34.354617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.968 [2024-11-26 20:39:34.357124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.357160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.968 [2024-11-26 20:39:34.357168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.968 [2024-11-26 20:39:34.359529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.359558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:19.968 [2024-11-26 20:39:34.359566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.968 [2024-11-26 20:39:34.362817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.362852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.968 [2024-11-26 20:39:34.362861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.968 [2024-11-26 20:39:34.364978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.365009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.968 [2024-11-26 20:39:34.365016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.968 [2024-11-26 20:39:34.367971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.368001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.968 [2024-11-26 20:39:34.368009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.968 [2024-11-26 20:39:34.370332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.370449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.968 [2024-11-26 20:39:34.370459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.968 [2024-11-26 20:39:34.373534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.968 [2024-11-26 20:39:34.373565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.969 [2024-11-26 20:39:34.373572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.969 [2024-11-26 20:39:34.375912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.969 [2024-11-26 20:39:34.375941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.969 [2024-11-26 20:39:34.375948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.969 [2024-11-26 20:39:34.379330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.969 [2024-11-26 20:39:34.379360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.969 [2024-11-26 20:39:34.379368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.969 [2024-11-26 20:39:34.382878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.969 [2024-11-26 20:39:34.382907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.969 [2024-11-26 20:39:34.382914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.969 [2024-11-26 20:39:34.386334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.969 [2024-11-26 20:39:34.386445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.969 [2024-11-26 20:39:34.386454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.969 [2024-11-26 20:39:34.389872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.969 [2024-11-26 20:39:34.389974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.969 [2024-11-26 20:39:34.390003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.969 [2024-11-26 20:39:34.393425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.969 [2024-11-26 20:39:34.393529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.969 [2024-11-26 20:39:34.393538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.969 [2024-11-26 20:39:34.396975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.969 [2024-11-26 20:39:34.397073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.969 [2024-11-26 20:39:34.397082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.969 [2024-11-26 20:39:34.400531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.969 [2024-11-26 20:39:34.400649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.969 [2024-11-26 20:39:34.400658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.969 [2024-11-26 20:39:34.404063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.969 [2024-11-26 20:39:34.404167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.969 [2024-11-26 20:39:34.404177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.969 [2024-11-26 20:39:34.407627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.969 [2024-11-26 20:39:34.407655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.969 [2024-11-26 20:39:34.407662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.969 [2024-11-26 20:39:34.411049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.969 [2024-11-26 20:39:34.411153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.969 [2024-11-26 20:39:34.411163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.969 [2024-11-26 20:39:34.414575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.969 [2024-11-26 20:39:34.414679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.969 [2024-11-26 20:39:34.414714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.969 [2024-11-26 20:39:34.418159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.969 [2024-11-26 20:39:34.418264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.969 [2024-11-26 20:39:34.418318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.969 [2024-11-26 20:39:34.421788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.969 [2024-11-26 20:39:34.421888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.969 [2024-11-26 20:39:34.421942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.969 [2024-11-26 20:39:34.425395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.969 [2024-11-26 20:39:34.425495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.969 [2024-11-26 20:39:34.425564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.969 [2024-11-26 20:39:34.429052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 
00:16:19.969 [2024-11-26 20:39:34.429157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.969 [2024-11-26 20:39:34.429211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.969 [2024-11-26 20:39:34.432627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.969 [2024-11-26 20:39:34.432727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.969 [2024-11-26 20:39:34.432779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.969 [2024-11-26 20:39:34.436210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.969 [2024-11-26 20:39:34.436313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.969 [2024-11-26 20:39:34.436365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.969 [2024-11-26 20:39:34.439764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.969 [2024-11-26 20:39:34.439866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.969 [2024-11-26 20:39:34.439920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.969 [2024-11-26 20:39:34.443319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.969 [2024-11-26 20:39:34.443423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.969 [2024-11-26 20:39:34.443479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.969 [2024-11-26 20:39:34.447006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.969 [2024-11-26 20:39:34.447109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.969 [2024-11-26 20:39:34.447165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.969 [2024-11-26 20:39:34.450665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.969 [2024-11-26 20:39:34.450766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.969 [2024-11-26 20:39:34.450820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.969 [2024-11-26 20:39:34.454319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.969 [2024-11-26 20:39:34.454424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.969 [2024-11-26 20:39:34.454477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.969 [2024-11-26 20:39:34.458012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.969 [2024-11-26 20:39:34.458116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.969 [2024-11-26 20:39:34.458169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.969 [2024-11-26 20:39:34.461580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.969 [2024-11-26 20:39:34.461691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.969 [2024-11-26 20:39:34.461744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.969 [2024-11-26 20:39:34.465197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.969 [2024-11-26 20:39:34.465301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.970 [2024-11-26 20:39:34.465377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.970 [2024-11-26 20:39:34.468957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.970 [2024-11-26 20:39:34.469065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.970 [2024-11-26 20:39:34.469119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.970 [2024-11-26 20:39:34.472563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.970 [2024-11-26 20:39:34.472684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.970 [2024-11-26 20:39:34.472738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.970 [2024-11-26 20:39:34.481288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.970 [2024-11-26 20:39:34.481639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.970 [2024-11-26 20:39:34.481686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.970 [2024-11-26 20:39:34.485513] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.970 [2024-11-26 20:39:34.485627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.970 [2024-11-26 20:39:34.485638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.970 [2024-11-26 20:39:34.489015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.970 [2024-11-26 20:39:34.489116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.970 [2024-11-26 20:39:34.489125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.970 [2024-11-26 20:39:34.492539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.970 [2024-11-26 20:39:34.492654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.970 [2024-11-26 20:39:34.492664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.970 [2024-11-26 20:39:34.496184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.970 [2024-11-26 20:39:34.496279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.970 [2024-11-26 20:39:34.496289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.970 [2024-11-26 20:39:34.499795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.970 [2024-11-26 20:39:34.499888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.970 [2024-11-26 20:39:34.499944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.970 [2024-11-26 20:39:34.503515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.970 [2024-11-26 20:39:34.503637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.970 [2024-11-26 20:39:34.503715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.970 [2024-11-26 20:39:34.507167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.970 [2024-11-26 20:39:34.507271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.970 [2024-11-26 20:39:34.507326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:16:19.970 [2024-11-26 20:39:34.510815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.970 [2024-11-26 20:39:34.510919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.970 [2024-11-26 20:39:34.510978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.970 [2024-11-26 20:39:34.514482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.970 [2024-11-26 20:39:34.514584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.970 [2024-11-26 20:39:34.515025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.970 [2024-11-26 20:39:34.518621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:19.970 [2024-11-26 20:39:34.518734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.970 [2024-11-26 20:39:34.518792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.230 [2024-11-26 20:39:34.522260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.230 [2024-11-26 20:39:34.522366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.230 [2024-11-26 20:39:34.522421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.230 [2024-11-26 20:39:34.525858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.230 [2024-11-26 20:39:34.525960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.230 [2024-11-26 20:39:34.526018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.230 [2024-11-26 20:39:34.529495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.230 [2024-11-26 20:39:34.529605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.230 [2024-11-26 20:39:34.529662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.230 [2024-11-26 20:39:34.533188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.230 [2024-11-26 20:39:34.533290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.230 [2024-11-26 20:39:34.533403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.230 [2024-11-26 20:39:34.536869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.230 [2024-11-26 20:39:34.536975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.230 [2024-11-26 20:39:34.537030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.540536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.540659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 [2024-11-26 20:39:34.540722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.544146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.544249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 [2024-11-26 20:39:34.544335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.547838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.547942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 [2024-11-26 20:39:34.547996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.551492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.551585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 [2024-11-26 20:39:34.551609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.554942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.555040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 [2024-11-26 20:39:34.555049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.558464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.558559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 [2024-11-26 20:39:34.558568] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.562004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.562095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 [2024-11-26 20:39:34.562104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.565520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.565622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 [2024-11-26 20:39:34.565631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.569064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.569159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 [2024-11-26 20:39:34.569168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.572578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.572684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 [2024-11-26 20:39:34.572693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.576065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.576159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 [2024-11-26 20:39:34.576168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.579581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.579711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 [2024-11-26 20:39:34.579764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.583180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.583285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 
[2024-11-26 20:39:34.583339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.586852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.586955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 [2024-11-26 20:39:34.587010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.590554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.590671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 [2024-11-26 20:39:34.590792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.594299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.594436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 [2024-11-26 20:39:34.594491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.598118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.598223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 [2024-11-26 20:39:34.598275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.601861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.601976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 [2024-11-26 20:39:34.602123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.605573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.605696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 [2024-11-26 20:39:34.605752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.609261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.609356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4704 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 [2024-11-26 20:39:34.609366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.612832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.612859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 [2024-11-26 20:39:34.612866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.616265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.616362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 [2024-11-26 20:39:34.616371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.620053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.620195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 [2024-11-26 20:39:34.620212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.624281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.624420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 [2024-11-26 20:39:34.624535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.628853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.629001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 [2024-11-26 20:39:34.629084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.632752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.632866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 [2024-11-26 20:39:34.632920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.231 [2024-11-26 20:39:34.636387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.231 [2024-11-26 20:39:34.636494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:12 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.231 [2024-11-26 20:39:34.636546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.640080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.640188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.640240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.643746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.643853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.643907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.647379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.647487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.647541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.651027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.651142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.651200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.655073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.655199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.655261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.658752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.658860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.658919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.662431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.662540] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.662610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.666118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.666227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.666280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.669803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.669915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.669974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.673614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.673715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.673779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.677212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.677307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.677317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.680747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.680778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.680785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.684258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.684358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.684368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.687859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.687957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.688035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.691632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.691736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.691838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.695266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.695373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.695464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.698981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.699086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.699189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.702706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.702810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.702872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.706415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.706519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.706598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.710214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.710320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.710377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.713893] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.714011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.714087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.717552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.717680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.717793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.721232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.721337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.721402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.724968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.725072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.725130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.728610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.728711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.728762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.732233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.732346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.732402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.735870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.232 [2024-11-26 20:39:34.735975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.232 [2024-11-26 20:39:34.736029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:16:20.232 [2024-11-26 20:39:34.739528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.233 [2024-11-26 20:39:34.739649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.233 [2024-11-26 20:39:34.739705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.233 [2024-11-26 20:39:34.743141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.233 [2024-11-26 20:39:34.743254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.233 [2024-11-26 20:39:34.743349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.233 [2024-11-26 20:39:34.746805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.233 [2024-11-26 20:39:34.746899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.233 [2024-11-26 20:39:34.746950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.233 [2024-11-26 20:39:34.750411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.233 [2024-11-26 20:39:34.750516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.233 [2024-11-26 20:39:34.750596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.233 [2024-11-26 20:39:34.754083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.233 [2024-11-26 20:39:34.754188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.233 [2024-11-26 20:39:34.754246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.233 [2024-11-26 20:39:34.757749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.233 [2024-11-26 20:39:34.757868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.233 [2024-11-26 20:39:34.757922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.233 [2024-11-26 20:39:34.761422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.233 [2024-11-26 20:39:34.761525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.233 [2024-11-26 20:39:34.761611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.233 [2024-11-26 20:39:34.765069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.233 [2024-11-26 20:39:34.765170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.233 [2024-11-26 20:39:34.765225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.233 [2024-11-26 20:39:34.768775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.233 [2024-11-26 20:39:34.768880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.233 [2024-11-26 20:39:34.768944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.233 [2024-11-26 20:39:34.772441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.233 [2024-11-26 20:39:34.772546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.233 [2024-11-26 20:39:34.772636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.233 [2024-11-26 20:39:34.776146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.233 [2024-11-26 20:39:34.776250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.233 [2024-11-26 20:39:34.776305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.233 [2024-11-26 20:39:34.779821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.233 [2024-11-26 20:39:34.779925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.233 [2024-11-26 20:39:34.779978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.493 [2024-11-26 20:39:34.783494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.783607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.494 [2024-11-26 20:39:34.783675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.494 [2024-11-26 20:39:34.787171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.787274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.494 [2024-11-26 20:39:34.787363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.494 [2024-11-26 20:39:34.790898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.791001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.494 [2024-11-26 20:39:34.791079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.494 [2024-11-26 20:39:34.794539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.794661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.494 [2024-11-26 20:39:34.794715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.494 [2024-11-26 20:39:34.798213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.798318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.494 [2024-11-26 20:39:34.798372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.494 [2024-11-26 20:39:34.801869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.801972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.494 [2024-11-26 20:39:34.802097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.494 [2024-11-26 20:39:34.805612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.805712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.494 [2024-11-26 20:39:34.805764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.494 [2024-11-26 20:39:34.809266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.809362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.494 [2024-11-26 20:39:34.809371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.494 [2024-11-26 20:39:34.812826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.812857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.494 
[2024-11-26 20:39:34.812865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.494 [2024-11-26 20:39:34.816270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.816300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.494 [2024-11-26 20:39:34.816307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.494 [2024-11-26 20:39:34.819754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.819781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.494 [2024-11-26 20:39:34.819789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.494 [2024-11-26 20:39:34.823195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.823299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.494 [2024-11-26 20:39:34.823309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.494 [2024-11-26 20:39:34.826829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.826936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.494 [2024-11-26 20:39:34.826995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.494 [2024-11-26 20:39:34.830497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.830617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.494 [2024-11-26 20:39:34.830675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.494 [2024-11-26 20:39:34.834161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.834266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.494 [2024-11-26 20:39:34.834338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.494 [2024-11-26 20:39:34.838044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.838146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21536 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.494 [2024-11-26 20:39:34.838199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.494 [2024-11-26 20:39:34.841735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.841839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.494 [2024-11-26 20:39:34.841891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.494 [2024-11-26 20:39:34.845399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.845503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.494 [2024-11-26 20:39:34.845556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.494 [2024-11-26 20:39:34.849009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.849112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.494 [2024-11-26 20:39:34.849180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.494 [2024-11-26 20:39:34.852675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.852777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.494 [2024-11-26 20:39:34.852830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.494 [2024-11-26 20:39:34.856356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.856461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.494 [2024-11-26 20:39:34.856514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.494 [2024-11-26 20:39:34.859966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.860072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.494 [2024-11-26 20:39:34.860139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.494 [2024-11-26 20:39:34.863552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.863670] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.494 [2024-11-26 20:39:34.863793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.494 [2024-11-26 20:39:34.867283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.867390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.494 [2024-11-26 20:39:34.867444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.494 [2024-11-26 20:39:34.870945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.871053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.494 [2024-11-26 20:39:34.871113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.494 [2024-11-26 20:39:34.874679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.874781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.494 [2024-11-26 20:39:34.874843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.494 [2024-11-26 20:39:34.878410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.494 [2024-11-26 20:39:34.878520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.878577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.495 [2024-11-26 20:39:34.882096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.882202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.882256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.495 [2024-11-26 20:39:34.885745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.885848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.885890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.495 [2024-11-26 20:39:34.889317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.889411] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.889420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.495 [2024-11-26 20:39:34.892926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.893024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.893033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.495 [2024-11-26 20:39:34.896436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.896532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.896541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.495 [2024-11-26 20:39:34.900102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.900197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.900206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.495 [2024-11-26 20:39:34.903656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.903686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.903697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.495 [2024-11-26 20:39:34.907111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.907211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.907220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.495 [2024-11-26 20:39:34.910684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.910712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.910719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.495 [2024-11-26 20:39:34.914111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.914207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.914216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.495 [2024-11-26 20:39:34.917659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.917686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.917694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.495 [2024-11-26 20:39:34.921119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.921215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.921224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.495 [2024-11-26 20:39:34.924603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.924629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.924636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.495 [2024-11-26 20:39:34.928072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.928169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.928178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.495 [2024-11-26 20:39:34.931585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.931724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.931781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.495 [2024-11-26 20:39:34.935245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.935351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.935405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.495 [2024-11-26 20:39:34.938928] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.939033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.939089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.495 [2024-11-26 20:39:34.942642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.942744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.942804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.495 [2024-11-26 20:39:34.946227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.946335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.946444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.495 [2024-11-26 20:39:34.949877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.949989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.950109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.495 [2024-11-26 20:39:34.953569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.953691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.953761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.495 [2024-11-26 20:39:34.957193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.957295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.957346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.495 [2024-11-26 20:39:34.960844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.960950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.961003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:16:20.495 [2024-11-26 20:39:34.964474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.964579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.964656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.495 [2024-11-26 20:39:34.968140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.968245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.968335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.495 [2024-11-26 20:39:34.971806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.971912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.495 [2024-11-26 20:39:34.971963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.495 [2024-11-26 20:39:34.975464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.495 [2024-11-26 20:39:34.975572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.496 [2024-11-26 20:39:34.975648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.496 [2024-11-26 20:39:34.979201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.496 [2024-11-26 20:39:34.979319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.496 [2024-11-26 20:39:34.979378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.496 [2024-11-26 20:39:34.982894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.496 [2024-11-26 20:39:34.983000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.496 [2024-11-26 20:39:34.983056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.496 [2024-11-26 20:39:34.986604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.496 [2024-11-26 20:39:34.986706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.496 [2024-11-26 20:39:34.986760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.496 [2024-11-26 20:39:34.990236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.496 [2024-11-26 20:39:34.990340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.496 [2024-11-26 20:39:34.990392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.496 [2024-11-26 20:39:34.993971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.496 [2024-11-26 20:39:34.994082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.496 [2024-11-26 20:39:34.994133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.496 [2024-11-26 20:39:34.997601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.496 [2024-11-26 20:39:34.997701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.496 [2024-11-26 20:39:34.997753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.496 [2024-11-26 20:39:35.001304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.496 [2024-11-26 20:39:35.001401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.496 [2024-11-26 20:39:35.001411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.496 [2024-11-26 20:39:35.004800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.496 [2024-11-26 20:39:35.004829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.496 [2024-11-26 20:39:35.004836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.496 [2024-11-26 20:39:35.008291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.496 [2024-11-26 20:39:35.008395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.496 [2024-11-26 20:39:35.008404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.496 [2024-11-26 20:39:35.011899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.496 [2024-11-26 20:39:35.011996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.496 [2024-11-26 20:39:35.012050] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.496 [2024-11-26 20:39:35.015555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.496 [2024-11-26 20:39:35.015672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.496 [2024-11-26 20:39:35.015726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.496 [2024-11-26 20:39:35.019139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.496 [2024-11-26 20:39:35.019243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.496 [2024-11-26 20:39:35.019300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.496 [2024-11-26 20:39:35.022830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.496 [2024-11-26 20:39:35.022935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.496 [2024-11-26 20:39:35.022989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.496 [2024-11-26 20:39:35.026513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.496 [2024-11-26 20:39:35.026628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.496 [2024-11-26 20:39:35.026683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.496 [2024-11-26 20:39:35.030149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.496 [2024-11-26 20:39:35.030257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.496 [2024-11-26 20:39:35.030318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.496 [2024-11-26 20:39:35.033770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.496 [2024-11-26 20:39:35.033874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.496 [2024-11-26 20:39:35.033928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.496 [2024-11-26 20:39:35.037513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.496 [2024-11-26 20:39:35.037628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.496 [2024-11-26 
20:39:35.037687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.496 [2024-11-26 20:39:35.041138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.496 [2024-11-26 20:39:35.041242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.496 [2024-11-26 20:39:35.041310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.496 [2024-11-26 20:39:35.044803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.496 [2024-11-26 20:39:35.044915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.496 [2024-11-26 20:39:35.044971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.757 [2024-11-26 20:39:35.048405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.757 [2024-11-26 20:39:35.048509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.757 [2024-11-26 20:39:35.048568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.757 [2024-11-26 20:39:35.052036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.757 [2024-11-26 20:39:35.052149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.757 [2024-11-26 20:39:35.052205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.757 [2024-11-26 20:39:35.055655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.757 [2024-11-26 20:39:35.055758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.757 [2024-11-26 20:39:35.055832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.757 [2024-11-26 20:39:35.059319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.757 [2024-11-26 20:39:35.059422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.757 [2024-11-26 20:39:35.059475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.757 [2024-11-26 20:39:35.063002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.757 [2024-11-26 20:39:35.063112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:20.757 [2024-11-26 20:39:35.063163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.757 [2024-11-26 20:39:35.066628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.757 [2024-11-26 20:39:35.066730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.066805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.070350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.758 [2024-11-26 20:39:35.070453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.070536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.074014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.758 [2024-11-26 20:39:35.074108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.074117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.077535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.758 [2024-11-26 20:39:35.077642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.077652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.081102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.758 [2024-11-26 20:39:35.081195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.081205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.084660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.758 [2024-11-26 20:39:35.084686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.084693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.088115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.758 [2024-11-26 20:39:35.088215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.088227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.091694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.758 [2024-11-26 20:39:35.091722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.091730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.095125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.758 [2024-11-26 20:39:35.095228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.095238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.098694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.758 [2024-11-26 20:39:35.098722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.098729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.102176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.758 [2024-11-26 20:39:35.102277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.102289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.105724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.758 [2024-11-26 20:39:35.105752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.105759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.109187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.758 [2024-11-26 20:39:35.109285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.109294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.112740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.758 [2024-11-26 20:39:35.112769] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.112776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.116214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.758 [2024-11-26 20:39:35.116311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.116336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.119808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.758 [2024-11-26 20:39:35.119836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.119843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.123261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.758 [2024-11-26 20:39:35.123359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.123368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.126820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.758 [2024-11-26 20:39:35.126849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.126856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.130278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.758 [2024-11-26 20:39:35.130373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.130383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.133810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.758 [2024-11-26 20:39:35.133844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.133851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.137293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 
00:16:20.758 [2024-11-26 20:39:35.137391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.137400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.140874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.758 [2024-11-26 20:39:35.140903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.140910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.144341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.758 [2024-11-26 20:39:35.144443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.144452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.147935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.758 [2024-11-26 20:39:35.148030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.148040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.151531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.758 [2024-11-26 20:39:35.151638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.151647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.155143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.758 [2024-11-26 20:39:35.155236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.155245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.158685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.758 [2024-11-26 20:39:35.158760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.758 [2024-11-26 20:39:35.158818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.758 [2024-11-26 20:39:35.162296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.758 [2024-11-26 20:39:35.162400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.162452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.165919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.759 [2024-11-26 20:39:35.166031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.166085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.169577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.759 [2024-11-26 20:39:35.169686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.169738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.173197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.759 [2024-11-26 20:39:35.173298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.173350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.176903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.759 [2024-11-26 20:39:35.177003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.177055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.180496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.759 [2024-11-26 20:39:35.180605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.180661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.184158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.759 [2024-11-26 20:39:35.184259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.184312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.187830] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.759 [2024-11-26 20:39:35.187934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.187990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.191446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.759 [2024-11-26 20:39:35.191555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.191630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.195190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.759 [2024-11-26 20:39:35.195292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.195345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.198800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.759 [2024-11-26 20:39:35.198903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.198956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.202463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.759 [2024-11-26 20:39:35.202578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.202670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.206175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.759 [2024-11-26 20:39:35.206277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.206332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.209834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.759 [2024-11-26 20:39:35.209936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.209997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.213444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.759 [2024-11-26 20:39:35.213548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.213619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.217121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.759 [2024-11-26 20:39:35.217225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.217291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.220783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.759 [2024-11-26 20:39:35.220886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.220994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.224420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.759 [2024-11-26 20:39:35.224522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.224583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.228078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.759 [2024-11-26 20:39:35.228183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.228256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.231736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.759 [2024-11-26 20:39:35.231837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.231847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.235259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.759 [2024-11-26 20:39:35.235352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.235361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.238669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.759 [2024-11-26 20:39:35.238696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.238703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.241866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.759 [2024-11-26 20:39:35.241900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.241907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.245105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.759 [2024-11-26 20:39:35.245133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.245141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.248321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.759 [2024-11-26 20:39:35.248350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.248357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.251618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.759 [2024-11-26 20:39:35.251645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.251652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.254855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.759 [2024-11-26 20:39:35.254884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.759 [2024-11-26 20:39:35.254891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.759 [2024-11-26 20:39:35.258095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.760 [2024-11-26 20:39:35.258124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.760 [2024-11-26 20:39:35.258131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.760 [2024-11-26 20:39:35.261311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.760 [2024-11-26 20:39:35.261415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.760 [2024-11-26 20:39:35.261424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.760 [2024-11-26 20:39:35.264644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.760 [2024-11-26 20:39:35.264741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.760 [2024-11-26 20:39:35.264750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.760 [2024-11-26 20:39:35.267992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.760 [2024-11-26 20:39:35.268090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.760 [2024-11-26 20:39:35.268100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.760 [2024-11-26 20:39:35.271484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.760 [2024-11-26 20:39:35.271602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.760 [2024-11-26 20:39:35.271612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.760 [2024-11-26 20:39:35.274838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.760 [2024-11-26 20:39:35.274933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.760 [2024-11-26 20:39:35.274942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.760 [2024-11-26 20:39:35.278191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.760 [2024-11-26 20:39:35.278219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.760 [2024-11-26 20:39:35.278227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.760 [2024-11-26 20:39:35.281421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.760 [2024-11-26 20:39:35.281449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.760 
[2024-11-26 20:39:35.281456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.760 [2024-11-26 20:39:35.284680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.760 [2024-11-26 20:39:35.284706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.760 [2024-11-26 20:39:35.284714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.760 [2024-11-26 20:39:35.287879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.760 [2024-11-26 20:39:35.287907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.760 [2024-11-26 20:39:35.287914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.760 [2024-11-26 20:39:35.291121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.760 [2024-11-26 20:39:35.291150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.760 [2024-11-26 20:39:35.291157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.760 [2024-11-26 20:39:35.294399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.760 [2024-11-26 20:39:35.294498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.760 [2024-11-26 20:39:35.294507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.760 [2024-11-26 20:39:35.297855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.760 [2024-11-26 20:39:35.297948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.760 [2024-11-26 20:39:35.297957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.760 [2024-11-26 20:39:35.301153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.760 [2024-11-26 20:39:35.301182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.760 [2024-11-26 20:39:35.301189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.760 [2024-11-26 20:39:35.304364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.760 [2024-11-26 20:39:35.304393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6432 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.760 [2024-11-26 20:39:35.304400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.760 [2024-11-26 20:39:35.307584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:20.760 [2024-11-26 20:39:35.307621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.760 [2024-11-26 20:39:35.307628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:21.018 [2024-11-26 20:39:35.310829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:21.018 [2024-11-26 20:39:35.310857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.018 [2024-11-26 20:39:35.310864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:21.018 [2024-11-26 20:39:35.314084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:21.018 [2024-11-26 20:39:35.314111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.018 [2024-11-26 20:39:35.314118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:21.018 [2024-11-26 20:39:35.317397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:21.018 [2024-11-26 20:39:35.317501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.018 [2024-11-26 20:39:35.317510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:21.018 [2024-11-26 20:39:35.320702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:21.018 [2024-11-26 20:39:35.320730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.018 [2024-11-26 20:39:35.320737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:21.018 [2024-11-26 20:39:35.323943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:21.018 [2024-11-26 20:39:35.323972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.018 [2024-11-26 20:39:35.323979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:21.018 [2024-11-26 20:39:35.327180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:21.019 [2024-11-26 20:39:35.327209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.019 [2024-11-26 20:39:35.327217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:21.019 [2024-11-26 20:39:35.330394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:21.019 [2024-11-26 20:39:35.330423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.019 [2024-11-26 20:39:35.330430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:21.019 [2024-11-26 20:39:35.333560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:21.019 [2024-11-26 20:39:35.333600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.019 [2024-11-26 20:39:35.333608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:21.019 [2024-11-26 20:39:35.336851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:21.019 [2024-11-26 20:39:35.336878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.019 [2024-11-26 20:39:35.336885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:21.019 8600.50 IOPS, 1075.06 MiB/s [2024-11-26T20:39:35.574Z] [2024-11-26 20:39:35.341308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a69b0) 00:16:21.019 [2024-11-26 20:39:35.341336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.019 [2024-11-26 20:39:35.341343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:21.019 00:16:21.019 Latency(us) 00:16:21.019 [2024-11-26T20:39:35.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.019 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:21.019 nvme0n1 : 2.00 8596.51 1074.56 0.00 0.00 1857.85 743.58 7108.14 00:16:21.019 [2024-11-26T20:39:35.574Z] =================================================================================================================== 00:16:21.019 [2024-11-26T20:39:35.574Z] Total : 8596.51 1074.56 0.00 0.00 1857.85 743.58 7108.14 00:16:21.019 { 00:16:21.019 "results": [ 00:16:21.019 { 00:16:21.019 "job": "nvme0n1", 00:16:21.019 "core_mask": "0x2", 00:16:21.019 "workload": "randread", 00:16:21.019 "status": "finished", 00:16:21.019 "queue_depth": 16, 00:16:21.019 "io_size": 131072, 00:16:21.019 "runtime": 2.002789, 00:16:21.019 "iops": 8596.512163787598, 00:16:21.019 "mibps": 1074.5640204734498, 00:16:21.019 "io_failed": 0, 00:16:21.019 "io_timeout": 0, 00:16:21.019 "avg_latency_us": 1857.8464011866624, 00:16:21.019 "min_latency_us": 743.5815384615385, 00:16:21.019 "max_latency_us": 7108.135384615384 00:16:21.019 } 
00:16:21.019 ], 00:16:21.019 "core_count": 1 00:16:21.019 } 00:16:21.019 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:21.019 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:21.019 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:21.019 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:21.019 | .driver_specific 00:16:21.019 | .nvme_error 00:16:21.019 | .status_code 00:16:21.019 | .command_transient_transport_error' 00:16:21.019 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 556 > 0 )) 00:16:21.019 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79371 00:16:21.019 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79371 ']' 00:16:21.019 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79371 00:16:21.019 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:16:21.019 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:21.019 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79371 00:16:21.277 killing process with pid 79371 00:16:21.277 Received shutdown signal, test time was about 2.000000 seconds 00:16:21.277 00:16:21.277 Latency(us) 00:16:21.277 [2024-11-26T20:39:35.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.277 [2024-11-26T20:39:35.832Z] =================================================================================================================== 00:16:21.277 [2024-11-26T20:39:35.832Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:21.277 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:21.277 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:21.277 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79371' 00:16:21.277 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79371 00:16:21.277 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79371 00:16:21.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
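The randread pass above is judged by the transient-transport-error counter rather than by failed I/O: io_failed stays 0 in the results block, and the 1074.56 MiB/s figure is simply 8596.51 IOPS at 128 KiB per I/O. What follows is a minimal sketch of the pass/fail check reconstructed from the bperf_rpc and jq trace above; the real helpers live in the suite's host/digest.sh, and the JSON field layout is inferred from the jq filter in this log, not from a captured bdev_get_iostat dump.

# Sketch only, assembled from the commands traced above.
bperf_rpc() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
}

get_transient_errcount() {
    bperf_rpc bdev_get_iostat -b "$1" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
}

# 556 such errors were counted for nvme0n1 in the randread run above; the
# assertion succeeds for any non-zero count.
(( $(get_transient_errcount nvme0n1) > 0 ))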
00:16:21.277 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:16:21.277 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:21.277 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:16:21.277 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:16:21.277 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:16:21.277 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79422 00:16:21.277 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79422 /var/tmp/bperf.sock 00:16:21.277 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79422 ']' 00:16:21.277 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:16:21.277 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:21.277 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:21.277 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:21.277 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:21.277 20:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:21.277 [2024-11-26 20:39:35.741553] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:16:21.277 [2024-11-26 20:39:35.741762] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79422 ] 00:16:21.534 [2024-11-26 20:39:35.880793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.535 [2024-11-26 20:39:35.917412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.535 [2024-11-26 20:39:35.949315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:22.099 20:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:22.099 20:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:16:22.099 20:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:22.099 20:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:22.356 20:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:22.356 20:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.356 20:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:22.356 20:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.356 20:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:22.356 20:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:22.613 nvme0n1 00:16:22.613 20:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:22.613 20:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.613 20:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:22.613 20:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.613 20:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:22.613 20:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:22.872 Running I/O for 2 seconds... 
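Condensing the trace above, the randwrite error run is assembled from a short sequence of RPCs before perform_tests is kicked off. A sketch under the paths and flags shown in this log: bperf_rpc targets the bdevperf socket as expanded above, while rpc_cmd is the suite's helper for the target application, whose socket is not shown here, so both are left symbolic.

# Start bdevperf with its own RPC socket (flags copied from the trace above).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

# Keep per-status-code NVMe error statistics and retry I/O inside the bdev
# layer, so the injected digest errors are counted without failing the run.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the controller with data digest enabled while corruption is off...
rpc_cmd accel_error_inject_error -o crc32c -t disable
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# ...then re-enable crc32c corruption with the parameters traced above.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

# Run the 2-second workload; the data digest errors that follow are the
# injected crc32c corruptions being caught by the digest check.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests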
00:16:22.872 [2024-11-26 20:39:37.236569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efc128 00:16:22.872 [2024-11-26 20:39:37.237709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.872 [2024-11-26 20:39:37.237746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:22.872 [2024-11-26 20:39:37.250299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efc998 00:16:22.872 [2024-11-26 20:39:37.251382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.872 [2024-11-26 20:39:37.251498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.872 [2024-11-26 20:39:37.264007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efd208 00:16:22.872 [2024-11-26 20:39:37.265178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.872 [2024-11-26 20:39:37.265204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:22.872 [2024-11-26 20:39:37.277728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efda78 00:16:22.872 [2024-11-26 20:39:37.278783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.872 [2024-11-26 20:39:37.278808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:22.872 [2024-11-26 20:39:37.291339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efe2e8 00:16:22.872 [2024-11-26 20:39:37.292455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.872 [2024-11-26 20:39:37.292476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:22.872 [2024-11-26 20:39:37.305056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efeb58 00:16:22.872 [2024-11-26 20:39:37.306098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.872 [2024-11-26 20:39:37.306207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:22.872 [2024-11-26 20:39:37.324541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efef90 00:16:22.872 [2024-11-26 20:39:37.326563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.872 [2024-11-26 20:39:37.326599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0075 
p:0 m:0 dnr:0 00:16:22.872 [2024-11-26 20:39:37.338212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efeb58 00:16:22.872 [2024-11-26 20:39:37.340275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.872 [2024-11-26 20:39:37.340297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:22.872 [2024-11-26 20:39:37.351908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efe2e8 00:16:22.872 [2024-11-26 20:39:37.353883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.872 [2024-11-26 20:39:37.353907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:22.872 [2024-11-26 20:39:37.365559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efda78 00:16:22.872 [2024-11-26 20:39:37.367629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.872 [2024-11-26 20:39:37.367654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:22.872 [2024-11-26 20:39:37.379283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efd208 00:16:22.872 [2024-11-26 20:39:37.381210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.872 [2024-11-26 20:39:37.381309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:22.872 [2024-11-26 20:39:37.393025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efc998 00:16:22.872 [2024-11-26 20:39:37.395017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.872 [2024-11-26 20:39:37.395039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:22.872 [2024-11-26 20:39:37.406697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efc128 00:16:22.872 [2024-11-26 20:39:37.408595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.872 [2024-11-26 20:39:37.408619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:22.872 [2024-11-26 20:39:37.420280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efb8b8 00:16:22.872 [2024-11-26 20:39:37.422246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.872 [2024-11-26 20:39:37.422269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 
cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:23.157 [2024-11-26 20:39:37.433946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efb048 00:16:23.157 [2024-11-26 20:39:37.435831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.157 [2024-11-26 20:39:37.435924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:23.157 [2024-11-26 20:39:37.447640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efa7d8 00:16:23.157 [2024-11-26 20:39:37.449481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.157 [2024-11-26 20:39:37.449505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:23.157 [2024-11-26 20:39:37.461225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef9f68 00:16:23.157 [2024-11-26 20:39:37.463086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.157 [2024-11-26 20:39:37.463182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.157 [2024-11-26 20:39:37.474916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef96f8 00:16:23.157 [2024-11-26 20:39:37.476805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.157 [2024-11-26 20:39:37.476826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:23.157 [2024-11-26 20:39:37.488563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef8e88 00:16:23.157 [2024-11-26 20:39:37.490401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.157 [2024-11-26 20:39:37.490496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:23.157 [2024-11-26 20:39:37.502300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef8618 00:16:23.157 [2024-11-26 20:39:37.504154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.157 [2024-11-26 20:39:37.504174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:23.158 [2024-11-26 20:39:37.515935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef7da8 00:16:23.158 [2024-11-26 20:39:37.517714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.158 [2024-11-26 20:39:37.517739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:23.158 [2024-11-26 20:39:37.529538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef7538 00:16:23.158 [2024-11-26 20:39:37.531386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.158 [2024-11-26 20:39:37.531407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:23.158 [2024-11-26 20:39:37.543267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef6cc8 00:16:23.158 [2024-11-26 20:39:37.545005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.158 [2024-11-26 20:39:37.545097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:23.158 [2024-11-26 20:39:37.556935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef6458 00:16:23.158 [2024-11-26 20:39:37.558739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.158 [2024-11-26 20:39:37.558759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:23.158 [2024-11-26 20:39:37.570581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef5be8 00:16:23.158 [2024-11-26 20:39:37.572293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.158 [2024-11-26 20:39:37.572384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:23.158 [2024-11-26 20:39:37.584262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef5378 00:16:23.158 [2024-11-26 20:39:37.586040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.158 [2024-11-26 20:39:37.586064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:23.158 [2024-11-26 20:39:37.597931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef4b08 00:16:23.158 [2024-11-26 20:39:37.599625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.158 [2024-11-26 20:39:37.599649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:23.158 [2024-11-26 20:39:37.611524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef4298 00:16:23.158 [2024-11-26 20:39:37.613262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.158 [2024-11-26 20:39:37.613286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:23.158 [2024-11-26 20:39:37.625194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef3a28 00:16:23.158 [2024-11-26 20:39:37.626858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.158 [2024-11-26 20:39:37.626956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:23.158 [2024-11-26 20:39:37.638894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef31b8 00:16:23.158 [2024-11-26 20:39:37.640600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.158 [2024-11-26 20:39:37.640624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:23.158 [2024-11-26 20:39:37.652548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef2948 00:16:23.158 [2024-11-26 20:39:37.654180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.158 [2024-11-26 20:39:37.654270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:23.158 [2024-11-26 20:39:37.666222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef20d8 00:16:23.158 [2024-11-26 20:39:37.667900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.158 [2024-11-26 20:39:37.668003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:23.158 [2024-11-26 20:39:37.680026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef1868 00:16:23.158 [2024-11-26 20:39:37.681687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.158 [2024-11-26 20:39:37.681789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.442 [2024-11-26 20:39:37.693847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef0ff8 00:16:23.442 [2024-11-26 20:39:37.695493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.442 [2024-11-26 20:39:37.695599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:23.442 [2024-11-26 20:39:37.707652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef0788 00:16:23.442 [2024-11-26 20:39:37.709274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.442 [2024-11-26 20:39:37.709374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:23.442 [2024-11-26 20:39:37.721424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016eeff18 00:16:23.442 [2024-11-26 20:39:37.723054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.442 [2024-11-26 20:39:37.723155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:23.442 [2024-11-26 20:39:37.735224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016eef6a8 00:16:23.442 [2024-11-26 20:39:37.736821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.442 [2024-11-26 20:39:37.736921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:23.442 [2024-11-26 20:39:37.749057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016eeee38 00:16:23.442 [2024-11-26 20:39:37.750655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.442 [2024-11-26 20:39:37.750754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:23.442 [2024-11-26 20:39:37.762858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016eee5c8 00:16:23.442 [2024-11-26 20:39:37.764420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.442 [2024-11-26 20:39:37.764519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:23.442 [2024-11-26 20:39:37.776667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016eedd58 00:16:23.442 [2024-11-26 20:39:37.778225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.442 [2024-11-26 20:39:37.778327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:23.442 [2024-11-26 20:39:37.790479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016eed4e8 00:16:23.442 [2024-11-26 20:39:37.792015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.442 [2024-11-26 20:39:37.792115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:23.442 [2024-11-26 20:39:37.804289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016eecc78 00:16:23.442 [2024-11-26 20:39:37.805809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.442 [2024-11-26 
20:39:37.805908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:23.442 [2024-11-26 20:39:37.818066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016eec408 00:16:23.442 [2024-11-26 20:39:37.819563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.442 [2024-11-26 20:39:37.819680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:23.442 [2024-11-26 20:39:37.831920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016eebb98 00:16:23.442 [2024-11-26 20:39:37.833403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.442 [2024-11-26 20:39:37.833502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:23.442 [2024-11-26 20:39:37.845709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016eeb328 00:16:23.442 [2024-11-26 20:39:37.847189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.442 [2024-11-26 20:39:37.847287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:23.442 [2024-11-26 20:39:37.859497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016eeaab8 00:16:23.442 [2024-11-26 20:39:37.860952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.442 [2024-11-26 20:39:37.861049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:23.442 [2024-11-26 20:39:37.873312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016eea248 00:16:23.442 [2024-11-26 20:39:37.874764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.442 [2024-11-26 20:39:37.874861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:23.442 [2024-11-26 20:39:37.887136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee99d8 00:16:23.442 [2024-11-26 20:39:37.888560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.442 [2024-11-26 20:39:37.888670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:23.442 [2024-11-26 20:39:37.900953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee9168 00:16:23.442 [2024-11-26 20:39:37.902369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:23.442 [2024-11-26 20:39:37.902460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.442 [2024-11-26 20:39:37.914714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee88f8 00:16:23.442 [2024-11-26 20:39:37.916029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.442 [2024-11-26 20:39:37.916053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:23.442 [2024-11-26 20:39:37.928282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee8088 00:16:23.442 [2024-11-26 20:39:37.929602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.442 [2024-11-26 20:39:37.929626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:23.442 [2024-11-26 20:39:37.941886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee7818 00:16:23.442 [2024-11-26 20:39:37.943263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.442 [2024-11-26 20:39:37.943288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:23.442 [2024-11-26 20:39:37.955579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee6fa8 00:16:23.442 [2024-11-26 20:39:37.956868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.442 [2024-11-26 20:39:37.956959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:23.442 [2024-11-26 20:39:37.969263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee6738 00:16:23.442 [2024-11-26 20:39:37.970611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.442 [2024-11-26 20:39:37.970633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:23.442 [2024-11-26 20:39:37.982996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee5ec8 00:16:23.442 [2024-11-26 20:39:37.984250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.442 [2024-11-26 20:39:37.984340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:23.701 [2024-11-26 20:39:37.996685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee5658 00:16:23.701 [2024-11-26 20:39:37.997914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15586 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:16:23.701 [2024-11-26 20:39:37.997938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:23.701 [2024-11-26 20:39:38.010276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee4de8 00:16:23.701 [2024-11-26 20:39:38.011493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.701 [2024-11-26 20:39:38.011583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:23.701 [2024-11-26 20:39:38.023939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee4578 00:16:23.701 [2024-11-26 20:39:38.025208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.701 [2024-11-26 20:39:38.025232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:23.701 [2024-11-26 20:39:38.037629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee3d08 00:16:23.701 [2024-11-26 20:39:38.038816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.701 [2024-11-26 20:39:38.038908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:23.701 [2024-11-26 20:39:38.051279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee3498 00:16:23.701 [2024-11-26 20:39:38.052507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.701 [2024-11-26 20:39:38.052527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:23.701 [2024-11-26 20:39:38.064941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee2c28 00:16:23.701 [2024-11-26 20:39:38.066098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.701 [2024-11-26 20:39:38.066188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:23.701 [2024-11-26 20:39:38.078633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee23b8 00:16:23.701 [2024-11-26 20:39:38.079840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.701 [2024-11-26 20:39:38.079936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:23.701 [2024-11-26 20:39:38.092419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee1b48 00:16:23.701 [2024-11-26 20:39:38.093616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 
nsid:1 lba:1422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.701 [2024-11-26 20:39:38.093712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:23.701 [2024-11-26 20:39:38.106226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee12d8 00:16:23.701 [2024-11-26 20:39:38.107399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.701 [2024-11-26 20:39:38.107498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:23.701 [2024-11-26 20:39:38.120058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee0a68 00:16:23.701 [2024-11-26 20:39:38.121226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.701 [2024-11-26 20:39:38.121325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.701 [2024-11-26 20:39:38.133877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee01f8 00:16:23.701 [2024-11-26 20:39:38.135028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.701 [2024-11-26 20:39:38.135125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:23.701 [2024-11-26 20:39:38.147660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016edf988 00:16:23.701 [2024-11-26 20:39:38.148786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.701 [2024-11-26 20:39:38.148882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:23.701 [2024-11-26 20:39:38.161473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016edf118 00:16:23.701 [2024-11-26 20:39:38.162615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.701 [2024-11-26 20:39:38.162713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:23.701 [2024-11-26 20:39:38.175333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ede8a8 00:16:23.701 [2024-11-26 20:39:38.176438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.701 [2024-11-26 20:39:38.176537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:23.701 [2024-11-26 20:39:38.189140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ede038 00:16:23.701 [2024-11-26 20:39:38.190251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:2 nsid:1 lba:15542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.701 [2024-11-26 20:39:38.190349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:23.701 [2024-11-26 20:39:38.208739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ede038 00:16:23.701 [2024-11-26 20:39:38.210796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.701 [2024-11-26 20:39:38.210894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:23.701 [2024-11-26 20:39:38.222528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ede8a8 00:16:23.701 18345.00 IOPS, 71.66 MiB/s [2024-11-26T20:39:38.256Z] [2024-11-26 20:39:38.224586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.701 [2024-11-26 20:39:38.224709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:23.701 [2024-11-26 20:39:38.236389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016edf118 00:16:23.701 [2024-11-26 20:39:38.238426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.701 [2024-11-26 20:39:38.238534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:23.701 [2024-11-26 20:39:38.250202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016edf988 00:16:23.701 [2024-11-26 20:39:38.252212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.701 [2024-11-26 20:39:38.252311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:23.959 [2024-11-26 20:39:38.264073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee01f8 00:16:23.960 [2024-11-26 20:39:38.266081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.960 [2024-11-26 20:39:38.266186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:23.960 [2024-11-26 20:39:38.278552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee0a68 00:16:23.960 [2024-11-26 20:39:38.280602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.960 [2024-11-26 20:39:38.280715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:23.960 [2024-11-26 20:39:38.292441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee12d8 00:16:23.960 
[2024-11-26 20:39:38.294436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:25195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.960 [2024-11-26 20:39:38.294540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:23.960 [2024-11-26 20:39:38.306286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee1b48 00:16:23.960 [2024-11-26 20:39:38.308229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.960 [2024-11-26 20:39:38.308327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:23.960 [2024-11-26 20:39:38.320065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee23b8 00:16:23.960 [2024-11-26 20:39:38.322015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.960 [2024-11-26 20:39:38.322117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:23.960 [2024-11-26 20:39:38.334276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee2c28 00:16:23.960 [2024-11-26 20:39:38.336124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.960 [2024-11-26 20:39:38.336156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:23.960 [2024-11-26 20:39:38.347898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee3498 00:16:23.960 [2024-11-26 20:39:38.349733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.960 [2024-11-26 20:39:38.349758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:23.960 [2024-11-26 20:39:38.361484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee3d08 00:16:23.960 [2024-11-26 20:39:38.363315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.960 [2024-11-26 20:39:38.363462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:23.960 [2024-11-26 20:39:38.376162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee4578 00:16:23.960 [2024-11-26 20:39:38.377962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.960 [2024-11-26 20:39:38.378086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:23.960 [2024-11-26 20:39:38.390849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with 
pdu=0x200016ee4de8 00:16:23.960 [2024-11-26 20:39:38.392712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.960 [2024-11-26 20:39:38.392821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:23.960 [2024-11-26 20:39:38.405194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee5658 00:16:23.960 [2024-11-26 20:39:38.406978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.960 [2024-11-26 20:39:38.407009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:23.960 [2024-11-26 20:39:38.419661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee5ec8 00:16:23.960 [2024-11-26 20:39:38.421408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.960 [2024-11-26 20:39:38.421436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:23.960 [2024-11-26 20:39:38.434092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee6738 00:16:23.960 [2024-11-26 20:39:38.435919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.960 [2024-11-26 20:39:38.435942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:23.960 [2024-11-26 20:39:38.448630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee6fa8 00:16:23.960 [2024-11-26 20:39:38.450417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.960 [2024-11-26 20:39:38.450444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:23.960 [2024-11-26 20:39:38.462904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee7818 00:16:23.960 [2024-11-26 20:39:38.464613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.960 [2024-11-26 20:39:38.464641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:23.960 [2024-11-26 20:39:38.477244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee8088 00:16:23.960 [2024-11-26 20:39:38.478951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.960 [2024-11-26 20:39:38.478978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:23.960 [2024-11-26 20:39:38.491778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12a0ae0) with pdu=0x200016ee88f8 00:16:23.960 [2024-11-26 20:39:38.493525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.960 [2024-11-26 20:39:38.493557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:23.960 [2024-11-26 20:39:38.505675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee9168 00:16:23.960 [2024-11-26 20:39:38.507352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.960 [2024-11-26 20:39:38.507387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:24.219 [2024-11-26 20:39:38.519411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ee99d8 00:16:24.219 [2024-11-26 20:39:38.521163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.219 [2024-11-26 20:39:38.521182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:24.219 [2024-11-26 20:39:38.532887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016eea248 00:16:24.219 [2024-11-26 20:39:38.534541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.219 [2024-11-26 20:39:38.534563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:24.219 [2024-11-26 20:39:38.545775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016eeaab8 00:16:24.219 [2024-11-26 20:39:38.547502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.219 [2024-11-26 20:39:38.547527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:24.219 [2024-11-26 20:39:38.558770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016eeb328 00:16:24.219 [2024-11-26 20:39:38.560371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.219 [2024-11-26 20:39:38.560393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:24.220 [2024-11-26 20:39:38.571630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016eebb98 00:16:24.220 [2024-11-26 20:39:38.573234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.220 [2024-11-26 20:39:38.573256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:24.220 [2024-11-26 20:39:38.584426] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016eec408 00:16:24.220 [2024-11-26 20:39:38.586025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.220 [2024-11-26 20:39:38.586046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:24.220 [2024-11-26 20:39:38.597449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016eecc78 00:16:24.220 [2024-11-26 20:39:38.599115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.220 [2024-11-26 20:39:38.599135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:24.220 [2024-11-26 20:39:38.610525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016eed4e8 00:16:24.220 [2024-11-26 20:39:38.612076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.220 [2024-11-26 20:39:38.612104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:24.220 [2024-11-26 20:39:38.623532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016eedd58 00:16:24.220 [2024-11-26 20:39:38.625083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.220 [2024-11-26 20:39:38.625110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:24.220 [2024-11-26 20:39:38.636505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016eee5c8 00:16:24.220 [2024-11-26 20:39:38.638119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.220 [2024-11-26 20:39:38.638137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:24.220 [2024-11-26 20:39:38.649504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016eeee38 00:16:24.220 [2024-11-26 20:39:38.651023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.220 [2024-11-26 20:39:38.651133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:24.220 [2024-11-26 20:39:38.662581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016eef6a8 00:16:24.220 [2024-11-26 20:39:38.664066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.220 [2024-11-26 20:39:38.664087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:24.220 [2024-11-26 20:39:38.675697] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016eeff18 00:16:24.220 [2024-11-26 20:39:38.677241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.220 [2024-11-26 20:39:38.677267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:24.220 [2024-11-26 20:39:38.688764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef0788 00:16:24.220 [2024-11-26 20:39:38.690232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.220 [2024-11-26 20:39:38.690327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:24.220 [2024-11-26 20:39:38.701761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef0ff8 00:16:24.220 [2024-11-26 20:39:38.703211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.220 [2024-11-26 20:39:38.703235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:24.220 [2024-11-26 20:39:38.714735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef1868 00:16:24.220 [2024-11-26 20:39:38.716264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.220 [2024-11-26 20:39:38.716292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:24.220 [2024-11-26 20:39:38.728151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef20d8 00:16:24.220 [2024-11-26 20:39:38.729571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.220 [2024-11-26 20:39:38.729683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:24.220 [2024-11-26 20:39:38.741652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef2948 00:16:24.220 [2024-11-26 20:39:38.743150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.220 [2024-11-26 20:39:38.743175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:24.220 [2024-11-26 20:39:38.754982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef31b8 00:16:24.220 [2024-11-26 20:39:38.756358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.220 [2024-11-26 20:39:38.756385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:24.220 
[2024-11-26 20:39:38.768150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef3a28 00:16:24.220 [2024-11-26 20:39:38.769506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.220 [2024-11-26 20:39:38.769529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:24.479 [2024-11-26 20:39:38.781043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef4298 00:16:24.479 [2024-11-26 20:39:38.782483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.479 [2024-11-26 20:39:38.782508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:24.479 [2024-11-26 20:39:38.794306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef4b08 00:16:24.479 [2024-11-26 20:39:38.795633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.479 [2024-11-26 20:39:38.795725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:24.479 [2024-11-26 20:39:38.807544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef5378 00:16:24.479 [2024-11-26 20:39:38.808879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.479 [2024-11-26 20:39:38.808901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:24.479 [2024-11-26 20:39:38.820876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef5be8 00:16:24.479 [2024-11-26 20:39:38.822204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.479 [2024-11-26 20:39:38.822233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:24.479 [2024-11-26 20:39:38.833852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef6458 00:16:24.480 [2024-11-26 20:39:38.835154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.480 [2024-11-26 20:39:38.835244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:24.480 [2024-11-26 20:39:38.846899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef6cc8 00:16:24.480 [2024-11-26 20:39:38.848264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.480 [2024-11-26 20:39:38.848288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0016 p:0 m:0 
dnr:0 00:16:24.480 [2024-11-26 20:39:38.859921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef7538 00:16:24.480 [2024-11-26 20:39:38.861167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.480 [2024-11-26 20:39:38.861194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:24.480 [2024-11-26 20:39:38.872785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef7da8 00:16:24.480 [2024-11-26 20:39:38.874018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.480 [2024-11-26 20:39:38.874042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:24.480 [2024-11-26 20:39:38.885424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef8618 00:16:24.480 [2024-11-26 20:39:38.886651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.480 [2024-11-26 20:39:38.886675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:24.480 [2024-11-26 20:39:38.898243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef8e88 00:16:24.480 [2024-11-26 20:39:38.899433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.480 [2024-11-26 20:39:38.899456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:24.480 [2024-11-26 20:39:38.911309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef96f8 00:16:24.480 [2024-11-26 20:39:38.912484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.480 [2024-11-26 20:39:38.912516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:24.480 [2024-11-26 20:39:38.924503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef9f68 00:16:24.480 [2024-11-26 20:39:38.925691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.480 [2024-11-26 20:39:38.925728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:24.480 [2024-11-26 20:39:38.937921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efa7d8 00:16:24.480 [2024-11-26 20:39:38.939082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.480 [2024-11-26 20:39:38.939115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 
cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:24.480 [2024-11-26 20:39:38.951199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efb048 00:16:24.480 [2024-11-26 20:39:38.952319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.480 [2024-11-26 20:39:38.952433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:24.480 [2024-11-26 20:39:38.964456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efb8b8 00:16:24.480 [2024-11-26 20:39:38.965667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.480 [2024-11-26 20:39:38.965799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:24.480 [2024-11-26 20:39:38.978007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efc128 00:16:24.480 [2024-11-26 20:39:38.979186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.480 [2024-11-26 20:39:38.979317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:24.480 [2024-11-26 20:39:38.991661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efc998 00:16:24.480 [2024-11-26 20:39:38.992822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.480 [2024-11-26 20:39:38.992955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.480 [2024-11-26 20:39:39.004920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efd208 00:16:24.480 [2024-11-26 20:39:39.006081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.480 [2024-11-26 20:39:39.006190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:24.480 [2024-11-26 20:39:39.018450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efda78 00:16:24.480 [2024-11-26 20:39:39.019577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.480 [2024-11-26 20:39:39.019688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:24.480 [2024-11-26 20:39:39.031981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efe2e8 00:16:24.480 [2024-11-26 20:39:39.033096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.740 [2024-11-26 20:39:39.033214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:24.740 [2024-11-26 20:39:39.047276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efeb58 00:16:24.740 [2024-11-26 20:39:39.048368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.740 [2024-11-26 20:39:39.048534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:24.740 [2024-11-26 20:39:39.067534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efef90 00:16:24.740 [2024-11-26 20:39:39.069615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.740 [2024-11-26 20:39:39.069745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:24.740 [2024-11-26 20:39:39.081524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efeb58 00:16:24.740 [2024-11-26 20:39:39.083598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.740 [2024-11-26 20:39:39.083634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:24.740 [2024-11-26 20:39:39.095308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efe2e8 00:16:24.740 [2024-11-26 20:39:39.097271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.740 [2024-11-26 20:39:39.097298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:24.740 [2024-11-26 20:39:39.108980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efda78 00:16:24.740 [2024-11-26 20:39:39.110936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.740 [2024-11-26 20:39:39.110962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:24.740 [2024-11-26 20:39:39.122597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efd208 00:16:24.740 [2024-11-26 20:39:39.124522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.740 [2024-11-26 20:39:39.124548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:24.740 [2024-11-26 20:39:39.136208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efc998 00:16:24.740 [2024-11-26 20:39:39.138143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.740 [2024-11-26 20:39:39.138167] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:24.740 [2024-11-26 20:39:39.149819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efc128 00:16:24.740 [2024-11-26 20:39:39.151808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.740 [2024-11-26 20:39:39.151833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:24.740 [2024-11-26 20:39:39.163491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efb8b8 00:16:24.740 [2024-11-26 20:39:39.165380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.740 [2024-11-26 20:39:39.165404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:24.740 [2024-11-26 20:39:39.177109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efb048 00:16:24.740 [2024-11-26 20:39:39.179059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.740 [2024-11-26 20:39:39.179081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:24.740 [2024-11-26 20:39:39.190791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016efa7d8 00:16:24.740 [2024-11-26 20:39:39.192657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.740 [2024-11-26 20:39:39.192682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:24.740 [2024-11-26 20:39:39.204393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef9f68 00:16:24.740 [2024-11-26 20:39:39.206309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.740 [2024-11-26 20:39:39.206331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.740 [2024-11-26 20:39:39.218100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a0ae0) with pdu=0x200016ef96f8 00:16:24.740 [2024-11-26 20:39:39.219916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.740 [2024-11-26 20:39:39.219943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:24.740 18533.50 IOPS, 72.40 MiB/s 00:16:24.740 Latency(us) 00:16:24.740 [2024-11-26T20:39:39.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.740 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:24.740 nvme0n1 : 2.01 18520.45 72.35 0.00 0.00 6905.23 5469.74 26416.05 00:16:24.740 [2024-11-26T20:39:39.295Z] 
=================================================================================================================== 00:16:24.740 [2024-11-26T20:39:39.295Z] Total : 18520.45 72.35 0.00 0.00 6905.23 5469.74 26416.05 00:16:24.740 { 00:16:24.740 "results": [ 00:16:24.740 { 00:16:24.740 "job": "nvme0n1", 00:16:24.740 "core_mask": "0x2", 00:16:24.740 "workload": "randwrite", 00:16:24.740 "status": "finished", 00:16:24.740 "queue_depth": 128, 00:16:24.740 "io_size": 4096, 00:16:24.740 "runtime": 2.008321, 00:16:24.740 "iops": 18520.445685724542, 00:16:24.740 "mibps": 72.3454909598615, 00:16:24.740 "io_failed": 0, 00:16:24.740 "io_timeout": 0, 00:16:24.740 "avg_latency_us": 6905.225337896945, 00:16:24.740 "min_latency_us": 5469.735384615385, 00:16:24.740 "max_latency_us": 26416.04923076923 00:16:24.740 } 00:16:24.740 ], 00:16:24.740 "core_count": 1 00:16:24.740 } 00:16:24.740 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:24.740 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:24.740 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:24.740 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:24.740 | .driver_specific 00:16:24.741 | .nvme_error 00:16:24.741 | .status_code 00:16:24.741 | .command_transient_transport_error' 00:16:24.999 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 )) 00:16:24.999 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79422 00:16:24.999 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79422 ']' 00:16:24.999 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79422 00:16:24.999 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:16:24.999 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:24.999 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79422 00:16:24.999 killing process with pid 79422 00:16:24.999 Received shutdown signal, test time was about 2.000000 seconds 00:16:24.999 00:16:24.999 Latency(us) 00:16:24.999 [2024-11-26T20:39:39.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.999 [2024-11-26T20:39:39.554Z] =================================================================================================================== 00:16:24.999 [2024-11-26T20:39:39.554Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:24.999 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:24.999 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:25.000 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79422' 00:16:25.000 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79422 00:16:25.000 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@978 -- # wait 79422 00:16:25.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:25.257 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:16:25.257 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:25.257 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:16:25.257 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:16:25.257 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:16:25.257 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79477 00:16:25.257 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79477 /var/tmp/bperf.sock 00:16:25.257 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:16:25.257 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79477 ']' 00:16:25.257 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:25.257 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:25.257 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:25.257 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:25.257 20:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:25.257 [2024-11-26 20:39:39.624063] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:16:25.257 [2024-11-26 20:39:39.624224] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79477 ] 00:16:25.257 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:25.257 Zero copy mechanism will not be used. 
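The trace above launches bdevperf idle (-z) against a private RPC socket and then blocks in waitforlisten until /var/tmp/bperf.sock is accepting connections, so every later bdev_nvme_* call can be issued over that socket. A minimal sketch of the same launch pattern, assuming an SPDK checkout at the path printed in the log and using a plain socket-exists poll in place of the harness's waitforlisten helper:

# Start bdevperf in RPC-driven mode (-z) with the parameters shown above:
# core mask 0x2, randwrite, 128 KiB I/O, queue depth 16, 2 second run.
SPDK_DIR=/home/vagrant/spdk_repo/spdk      # path from the log; adjust for your tree
BPERF_SOCK=/var/tmp/bperf.sock

"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
    -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Crude stand-in for waitforlisten: poll until the UNIX socket exists.
while [ ! -S "$BPERF_SOCK" ]; do sleep 0.2; done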
00:16:25.257 [2024-11-26 20:39:39.761929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.257 [2024-11-26 20:39:39.799070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.516 [2024-11-26 20:39:39.831190] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:26.085 20:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.085 20:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:16:26.085 20:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:26.085 20:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:26.344 20:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:26.344 20:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.344 20:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:26.344 20:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.344 20:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:26.344 20:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:26.663 nvme0n1 00:16:26.663 20:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:26.663 20:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.663 20:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:26.663 20:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.663 20:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:26.663 20:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:26.663 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:26.663 Zero copy mechanism will not be used. 00:16:26.663 Running I/O for 2 seconds... 
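Once bdevperf is listening, everything above is driven over RPC: bdev_nvme_set_options enables per-controller error counters and disables retries, accel_error_inject_error first clears any previous CRC32C injection and later arms a 32-operation corruption, the controller is attached with data digest enabled (--ddgst), and bdevperf.py triggers the 2-second run; the pass/fail check is the bdev_get_iostat / jq count of transient transport errors shown earlier in the log. A hedged sketch of that sequence, using the commands from the trace: in the harness the accel injection goes to the nvmf target application via rpc_cmd (its default RPC socket) while the bdev_nvme_* calls go to bdevperf on /var/tmp/bperf.sock, and TGT_RPC/BPERF_RPC below are placeholder helpers, not names from the script:

SPDK_DIR=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# RPC helpers: target app on its default socket, bdevperf on the bperf socket.
TGT_RPC="$SPDK_DIR/scripts/rpc.py"                      # nvmf target (rpc_cmd in the trace)
BPERF_RPC="$SPDK_DIR/scripts/rpc.py -s $BPERF_SOCK"     # bdevperf (bperf_rpc in the trace)

# Keep NVMe error statistics and never retry, so every digest failure
# surfaces as a COMMAND TRANSIENT TRANSPORT ERROR completion.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any CRC32C error injection left over from the previous sub-test.
$TGT_RPC accel_error_inject_error -o crc32c -t disable

# Attach the target subsystem with data digest (DDGST) enabled.
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Arm the injection: corrupt 32 CRC32C results, then run the workload.
$TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

# The sub-test passes when the transient transport error count is non-zero.
$BPERF_RPC bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'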
00:16:26.663 [2024-11-26 20:39:41.150052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:26.663 [2024-11-26 20:39:41.150129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.663 [2024-11-26 20:39:41.150154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:26.663 [2024-11-26 20:39:41.153472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:26.663 [2024-11-26 20:39:41.153541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.663 [2024-11-26 20:39:41.153557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:26.663 [2024-11-26 20:39:41.156640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:26.663 [2024-11-26 20:39:41.156737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.663 [2024-11-26 20:39:41.156759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:26.663 [2024-11-26 20:39:41.159810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:26.663 [2024-11-26 20:39:41.159882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.663 [2024-11-26 20:39:41.159897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:26.663 [2024-11-26 20:39:41.162972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:26.663 [2024-11-26 20:39:41.163121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.663 [2024-11-26 20:39:41.163135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.052 [2024-11-26 20:39:41.166251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.052 [2024-11-26 20:39:41.166322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.052 [2024-11-26 20:39:41.166337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.052 [2024-11-26 20:39:41.169416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.052 [2024-11-26 20:39:41.169487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.052 [2024-11-26 20:39:41.169501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.052 [2024-11-26 20:39:41.172572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.052 [2024-11-26 20:39:41.172663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.052 [2024-11-26 20:39:41.172677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.175730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.175801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 20:39:41.175815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.178897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.178967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 20:39:41.178982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.182068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.182177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 20:39:41.182191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.185284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.185354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 20:39:41.185368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.188435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.188498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 20:39:41.188512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.191582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.191686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 20:39:41.191706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.194738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.194813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 20:39:41.194827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.197840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.197916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 20:39:41.197930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.200968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.201085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 20:39:41.201099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.204202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.204266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 20:39:41.204280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.207357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.207431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 20:39:41.207445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.210533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.210619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 20:39:41.210634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.213646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.213752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 20:39:41.213773] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.216775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.216851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 20:39:41.216865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.219912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.220033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 20:39:41.220047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.223140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.223214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 20:39:41.223228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.226282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.226341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 20:39:41.226355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.229398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.229469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 20:39:41.229483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.232552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.232634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 20:39:41.232649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.235727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.235787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 20:39:41.235801] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.238862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.238937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 20:39:41.238951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.242009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.242087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 20:39:41.242101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.245162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.245224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 20:39:41.245238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.248310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.248373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 20:39:41.248387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.251461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.251551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 20:39:41.251566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.254646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.254708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 20:39:41.254722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.053 [2024-11-26 20:39:41.257755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.053 [2024-11-26 20:39:41.257825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.053 [2024-11-26 
20:39:41.257839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.260909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.260984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.054 [2024-11-26 20:39:41.260998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.264058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.264121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.054 [2024-11-26 20:39:41.264136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.267184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.267259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.054 [2024-11-26 20:39:41.267272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.270333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.270405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.054 [2024-11-26 20:39:41.270419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.273474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.273584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.054 [2024-11-26 20:39:41.273611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.276740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.276811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.054 [2024-11-26 20:39:41.276825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.279898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.279968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:27.054 [2024-11-26 20:39:41.279983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.283041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.283115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.054 [2024-11-26 20:39:41.283129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.286174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.286244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.054 [2024-11-26 20:39:41.286258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.289303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.289394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.054 [2024-11-26 20:39:41.289415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.292430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.292551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.054 [2024-11-26 20:39:41.292565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.295668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.295738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.054 [2024-11-26 20:39:41.295758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.298811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.298871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.054 [2024-11-26 20:39:41.298885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.301947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.302032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.054 [2024-11-26 20:39:41.302046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.305067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.305159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.054 [2024-11-26 20:39:41.305179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.308201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.308260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.054 [2024-11-26 20:39:41.308273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.311361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.311471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.054 [2024-11-26 20:39:41.311486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.314576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.314673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.054 [2024-11-26 20:39:41.314687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.317732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.317807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.054 [2024-11-26 20:39:41.317821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.320879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.320949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.054 [2024-11-26 20:39:41.320963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.324029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.324088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.054 [2024-11-26 20:39:41.324102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.327161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.327235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.054 [2024-11-26 20:39:41.327249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.330292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.330399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.054 [2024-11-26 20:39:41.330413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.333485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.333575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.054 [2024-11-26 20:39:41.333604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.336625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.336717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.054 [2024-11-26 20:39:41.336737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.339732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.339802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.054 [2024-11-26 20:39:41.339816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.054 [2024-11-26 20:39:41.342845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.054 [2024-11-26 20:39:41.342908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.342922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.345941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.055 [2024-11-26 20:39:41.346071] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.346085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.349179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.055 [2024-11-26 20:39:41.349242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.349256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.352311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.055 [2024-11-26 20:39:41.352382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.352396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.355453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.055 [2024-11-26 20:39:41.355526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.355540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.358601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.055 [2024-11-26 20:39:41.358684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.358698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.361756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.055 [2024-11-26 20:39:41.361831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.361845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.364892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.055 [2024-11-26 20:39:41.364994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.365008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.368106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.055 [2024-11-26 20:39:41.368182] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.368202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.371244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.055 [2024-11-26 20:39:41.371325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.371339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.374379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.055 [2024-11-26 20:39:41.374450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.374464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.377515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.055 [2024-11-26 20:39:41.377618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.377643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.380662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.055 [2024-11-26 20:39:41.380756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.380776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.383815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.055 [2024-11-26 20:39:41.383874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.383888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.386947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.055 [2024-11-26 20:39:41.387010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.387024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.390076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.055 [2024-11-26 
20:39:41.390147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.390161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.393215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.055 [2024-11-26 20:39:41.393289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.393303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.396354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.055 [2024-11-26 20:39:41.396434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.396454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.399499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.055 [2024-11-26 20:39:41.399638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.399652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.402752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.055 [2024-11-26 20:39:41.402829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.402843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.405895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.055 [2024-11-26 20:39:41.405957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.405971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.409054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.055 [2024-11-26 20:39:41.409117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.409131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.412188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 
00:16:27.055 [2024-11-26 20:39:41.412251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.412265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.415363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.055 [2024-11-26 20:39:41.415423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.415437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.418534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.055 [2024-11-26 20:39:41.418668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.418682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.421741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.055 [2024-11-26 20:39:41.421814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.421829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.424856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.055 [2024-11-26 20:39:41.424926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.055 [2024-11-26 20:39:41.424940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.055 [2024-11-26 20:39:41.427973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.055 [2024-11-26 20:39:41.428042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.428057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.056 [2024-11-26 20:39:41.431140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.431214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.431228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.056 [2024-11-26 20:39:41.434279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.434390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.434404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.056 [2024-11-26 20:39:41.437499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.437573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.437599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.056 [2024-11-26 20:39:41.440656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.440747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.440761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.056 [2024-11-26 20:39:41.443802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.443873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.443888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.056 [2024-11-26 20:39:41.446965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.447028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.447042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.056 [2024-11-26 20:39:41.450076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.450198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.450213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.056 [2024-11-26 20:39:41.453291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.453381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.453401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.056 [2024-11-26 20:39:41.456404] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.456475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.456488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.056 [2024-11-26 20:39:41.459560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.459644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.459659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.056 [2024-11-26 20:39:41.462723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.462799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.462813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.056 [2024-11-26 20:39:41.465833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.465908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.465922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.056 [2024-11-26 20:39:41.468975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.469080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.469094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.056 [2024-11-26 20:39:41.472191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.472261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.472275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.056 [2024-11-26 20:39:41.475317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.475407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.475421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.056 [2024-11-26 20:39:41.478462] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.478538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.478553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.056 [2024-11-26 20:39:41.481637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.481708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.481722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.056 [2024-11-26 20:39:41.484770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.484851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.484865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.056 [2024-11-26 20:39:41.487895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.487999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.488013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.056 [2024-11-26 20:39:41.491145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.491208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.491223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.056 [2024-11-26 20:39:41.494292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.494352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.494366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.056 [2024-11-26 20:39:41.497404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.497475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.497489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.056 
[2024-11-26 20:39:41.500545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.500636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.500650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.056 [2024-11-26 20:39:41.503713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.503784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.503798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.056 [2024-11-26 20:39:41.506923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.507068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.507085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.056 [2024-11-26 20:39:41.509851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.510079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.056 [2024-11-26 20:39:41.510109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.056 [2024-11-26 20:39:41.513085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.056 [2024-11-26 20:39:41.513313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.057 [2024-11-26 20:39:41.513338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.057 [2024-11-26 20:39:41.516271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.057 [2024-11-26 20:39:41.516499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.057 [2024-11-26 20:39:41.516520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.057 [2024-11-26 20:39:41.519480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.057 [2024-11-26 20:39:41.519719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.057 [2024-11-26 20:39:41.519748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:16:27.057 [2024-11-26 20:39:41.522690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.057 [2024-11-26 20:39:41.522914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.057 [2024-11-26 20:39:41.522942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.057 [2024-11-26 20:39:41.525841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.057 [2024-11-26 20:39:41.526088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.057 [2024-11-26 20:39:41.526115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.057 [2024-11-26 20:39:41.529024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.057 [2024-11-26 20:39:41.529251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.057 [2024-11-26 20:39:41.529379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.057 [2024-11-26 20:39:41.532288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.057 [2024-11-26 20:39:41.532513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.057 [2024-11-26 20:39:41.532540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.057 [2024-11-26 20:39:41.535598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.057 [2024-11-26 20:39:41.535825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.057 [2024-11-26 20:39:41.535852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.057 [2024-11-26 20:39:41.538767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.057 [2024-11-26 20:39:41.538995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.057 [2024-11-26 20:39:41.539022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.057 [2024-11-26 20:39:41.541941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.057 [2024-11-26 20:39:41.542182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.057 [2024-11-26 20:39:41.542209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.057 [2024-11-26 20:39:41.545127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.057 [2024-11-26 20:39:41.545422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.057 [2024-11-26 20:39:41.545444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.057 [2024-11-26 20:39:41.548354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.057 [2024-11-26 20:39:41.548582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.057 [2024-11-26 20:39:41.548619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.057 [2024-11-26 20:39:41.551546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.057 [2024-11-26 20:39:41.551787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.057 [2024-11-26 20:39:41.551813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.057 [2024-11-26 20:39:41.554723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.057 [2024-11-26 20:39:41.554950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.057 [2024-11-26 20:39:41.555004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.057 [2024-11-26 20:39:41.557910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.057 [2024-11-26 20:39:41.558160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.057 [2024-11-26 20:39:41.558214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.057 [2024-11-26 20:39:41.561137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.057 [2024-11-26 20:39:41.561365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.057 [2024-11-26 20:39:41.561385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.057 [2024-11-26 20:39:41.564316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.057 [2024-11-26 20:39:41.564623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.057 [2024-11-26 20:39:41.564644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.057 [2024-11-26 20:39:41.567581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.057 [2024-11-26 20:39:41.567822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.057 [2024-11-26 20:39:41.567848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.057 [2024-11-26 20:39:41.570775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.057 [2024-11-26 20:39:41.570999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.057 [2024-11-26 20:39:41.571026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.057 [2024-11-26 20:39:41.573936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.057 [2024-11-26 20:39:41.574184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.057 [2024-11-26 20:39:41.574212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.057 [2024-11-26 20:39:41.577109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.057 [2024-11-26 20:39:41.577335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.057 [2024-11-26 20:39:41.577387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.057 [2024-11-26 20:39:41.580296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.057 [2024-11-26 20:39:41.580523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.057 [2024-11-26 20:39:41.580561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.057 [2024-11-26 20:39:41.583502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.057 [2024-11-26 20:39:41.583745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.057 [2024-11-26 20:39:41.583790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.057 [2024-11-26 20:39:41.586718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.057 [2024-11-26 20:39:41.586947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.057 [2024-11-26 20:39:41.586967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.058 [2024-11-26 20:39:41.589873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.058 [2024-11-26 20:39:41.590112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.058 [2024-11-26 20:39:41.590132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.058 [2024-11-26 20:39:41.593055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.058 [2024-11-26 20:39:41.593282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.058 [2024-11-26 20:39:41.593302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.058 [2024-11-26 20:39:41.596205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.058 [2024-11-26 20:39:41.596502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.058 [2024-11-26 20:39:41.596526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.058 [2024-11-26 20:39:41.599463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.058 [2024-11-26 20:39:41.599703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.058 [2024-11-26 20:39:41.599723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.058 [2024-11-26 20:39:41.602641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.058 [2024-11-26 20:39:41.602868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.058 [2024-11-26 20:39:41.602895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.320 [2024-11-26 20:39:41.605810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.320 [2024-11-26 20:39:41.606049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.320 [2024-11-26 20:39:41.606069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.320 [2024-11-26 20:39:41.608985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.320 [2024-11-26 20:39:41.609215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.320 [2024-11-26 
20:39:41.609242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.320 [2024-11-26 20:39:41.612194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.320 [2024-11-26 20:39:41.612425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.320 [2024-11-26 20:39:41.612445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.320 [2024-11-26 20:39:41.615359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.320 [2024-11-26 20:39:41.615665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.320 [2024-11-26 20:39:41.615687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.320 [2024-11-26 20:39:41.618580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.320 [2024-11-26 20:39:41.618818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.320 [2024-11-26 20:39:41.618838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.320 [2024-11-26 20:39:41.621769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.320 [2024-11-26 20:39:41.622022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.320 [2024-11-26 20:39:41.622044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.320 [2024-11-26 20:39:41.624940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.320 [2024-11-26 20:39:41.625169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.320 [2024-11-26 20:39:41.625189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.320 [2024-11-26 20:39:41.628116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.320 [2024-11-26 20:39:41.628344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.320 [2024-11-26 20:39:41.628363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.320 [2024-11-26 20:39:41.631265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.320 [2024-11-26 20:39:41.631494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:27.320 [2024-11-26 20:39:41.631515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.320 [2024-11-26 20:39:41.634438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.320 [2024-11-26 20:39:41.634741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.320 [2024-11-26 20:39:41.634763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.320 [2024-11-26 20:39:41.637680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.320 [2024-11-26 20:39:41.637905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.320 [2024-11-26 20:39:41.637955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.320 [2024-11-26 20:39:41.640868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.320 [2024-11-26 20:39:41.641093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.320 [2024-11-26 20:39:41.641120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.320 [2024-11-26 20:39:41.644007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.320 [2024-11-26 20:39:41.644237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.320 [2024-11-26 20:39:41.644257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.320 [2024-11-26 20:39:41.647180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.320 [2024-11-26 20:39:41.647406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.320 [2024-11-26 20:39:41.647427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.320 [2024-11-26 20:39:41.650334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.320 [2024-11-26 20:39:41.650639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.320 [2024-11-26 20:39:41.650660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.320 [2024-11-26 20:39:41.653561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.320 [2024-11-26 20:39:41.653801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.320 [2024-11-26 20:39:41.653820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.320 [2024-11-26 20:39:41.656745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.320 [2024-11-26 20:39:41.656969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.320 [2024-11-26 20:39:41.656989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.320 [2024-11-26 20:39:41.659926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.320 [2024-11-26 20:39:41.660157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.660268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.663203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 20:39:41.663431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.663451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.666380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 20:39:41.666621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.666640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.669527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 20:39:41.669825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.669848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.672780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 20:39:41.673008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.673028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.675950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 20:39:41.676181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.676239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.679168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 20:39:41.679399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.679495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.682434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 20:39:41.682677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.682696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.685584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 20:39:41.685818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.685840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.688755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 20:39:41.688984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.689004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.691917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 20:39:41.692207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.692230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.695167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 20:39:41.695395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.695415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.698337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 20:39:41.698565] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.698606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.701495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 20:39:41.701740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.701766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.704699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 20:39:41.704923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.704949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.707890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 20:39:41.708117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.708143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.711071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 20:39:41.711297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.711399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.714312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 20:39:41.714621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.714642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.717513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 20:39:41.717753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.717803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.720732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 20:39:41.720960] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.721050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.723965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 20:39:41.724192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.724312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.727265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 20:39:41.727488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.727644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.730570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 20:39:41.730802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.730896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.733798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 20:39:41.734035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.734171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.737100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 20:39:41.737329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.737422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.740341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 20:39:41.740572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.740681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.743614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 
20:39:41.743840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.743925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.321 [2024-11-26 20:39:41.746834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.321 [2024-11-26 20:39:41.747061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.321 [2024-11-26 20:39:41.747083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.749976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.322 [2024-11-26 20:39:41.750282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.750305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.753237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.322 [2024-11-26 20:39:41.753468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.753489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.756406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.322 [2024-11-26 20:39:41.756647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.756666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.759579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.322 [2024-11-26 20:39:41.759822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.759848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.762760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.322 [2024-11-26 20:39:41.762989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.763009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.765939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 
00:16:27.322 [2024-11-26 20:39:41.766180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.766200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.769128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.322 [2024-11-26 20:39:41.769422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.769445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.772364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.322 [2024-11-26 20:39:41.772607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.772631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.775557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.322 [2024-11-26 20:39:41.775796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.775819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.778738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.322 [2024-11-26 20:39:41.778963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.778983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.781908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.322 [2024-11-26 20:39:41.782145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.782167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.785090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.322 [2024-11-26 20:39:41.785318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.785338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.788260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.322 [2024-11-26 20:39:41.788561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.788583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.791566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.322 [2024-11-26 20:39:41.791800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.791822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.794748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.322 [2024-11-26 20:39:41.794975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.794995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.797935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.322 [2024-11-26 20:39:41.798173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.798193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.801093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.322 [2024-11-26 20:39:41.801319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.801339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.804277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.322 [2024-11-26 20:39:41.804575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.804606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.807551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.322 [2024-11-26 20:39:41.807789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.807804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.810700] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.322 [2024-11-26 20:39:41.810924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.810943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.813897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.322 [2024-11-26 20:39:41.814143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.814170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.817092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.322 [2024-11-26 20:39:41.817322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.817342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.820269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.322 [2024-11-26 20:39:41.820566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.820600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.823526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.322 [2024-11-26 20:39:41.823762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.823784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.826701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.322 [2024-11-26 20:39:41.826926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.827051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.830057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.322 [2024-11-26 20:39:41.830287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.830307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.833264] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.322 [2024-11-26 20:39:41.833491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.322 [2024-11-26 20:39:41.833512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.322 [2024-11-26 20:39:41.836449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.323 [2024-11-26 20:39:41.836759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.323 [2024-11-26 20:39:41.836781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.323 [2024-11-26 20:39:41.839744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.323 [2024-11-26 20:39:41.839975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.323 [2024-11-26 20:39:41.840001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.323 [2024-11-26 20:39:41.842927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.323 [2024-11-26 20:39:41.843155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.323 [2024-11-26 20:39:41.843175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.323 [2024-11-26 20:39:41.846112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.323 [2024-11-26 20:39:41.846339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.323 [2024-11-26 20:39:41.846360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.323 [2024-11-26 20:39:41.849263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.323 [2024-11-26 20:39:41.849492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.323 [2024-11-26 20:39:41.849512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.323 [2024-11-26 20:39:41.852444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.323 [2024-11-26 20:39:41.852684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.323 [2024-11-26 20:39:41.852710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.323 
[2024-11-26 20:39:41.855608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.323 [2024-11-26 20:39:41.855834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.323 [2024-11-26 20:39:41.855854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.323 [2024-11-26 20:39:41.858774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.323 [2024-11-26 20:39:41.859000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.323 [2024-11-26 20:39:41.859019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.323 [2024-11-26 20:39:41.861945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.323 [2024-11-26 20:39:41.862190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.323 [2024-11-26 20:39:41.862209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.323 [2024-11-26 20:39:41.865125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.323 [2024-11-26 20:39:41.865350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.323 [2024-11-26 20:39:41.865370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.323 [2024-11-26 20:39:41.868289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.323 [2024-11-26 20:39:41.868516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.323 [2024-11-26 20:39:41.868536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.323 [2024-11-26 20:39:41.871455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.323 [2024-11-26 20:39:41.871694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.323 [2024-11-26 20:39:41.871713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.585 [2024-11-26 20:39:41.874664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.585 [2024-11-26 20:39:41.874890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.585 [2024-11-26 20:39:41.874916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:16:27.585 [2024-11-26 20:39:41.877821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.585 [2024-11-26 20:39:41.878043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.585 [2024-11-26 20:39:41.878063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.585 [2024-11-26 20:39:41.880750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.585 [2024-11-26 20:39:41.880795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.585 [2024-11-26 20:39:41.880809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.585 [2024-11-26 20:39:41.883906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.585 [2024-11-26 20:39:41.883951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.585 [2024-11-26 20:39:41.883965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.586 [2024-11-26 20:39:41.887055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.586 [2024-11-26 20:39:41.887102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.586 [2024-11-26 20:39:41.887115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.586 [2024-11-26 20:39:41.890181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.586 [2024-11-26 20:39:41.890222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.586 [2024-11-26 20:39:41.890235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.586 [2024-11-26 20:39:41.893325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.586 [2024-11-26 20:39:41.893437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.586 [2024-11-26 20:39:41.893450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.586 [2024-11-26 20:39:41.896538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.586 [2024-11-26 20:39:41.896579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.586 [2024-11-26 20:39:41.896606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.586 [2024-11-26 20:39:41.899685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.586 [2024-11-26 20:39:41.899725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.586 [2024-11-26 20:39:41.899739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.586 [2024-11-26 20:39:41.902821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.586 [2024-11-26 20:39:41.902862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.586 [2024-11-26 20:39:41.902876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.586 [2024-11-26 20:39:41.905949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.586 [2024-11-26 20:39:41.906000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.586 [2024-11-26 20:39:41.906014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.586 [2024-11-26 20:39:41.909100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.586 [2024-11-26 20:39:41.909144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.586 [2024-11-26 20:39:41.909157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.586 [2024-11-26 20:39:41.912282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.586 [2024-11-26 20:39:41.912393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.586 [2024-11-26 20:39:41.912407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.586 [2024-11-26 20:39:41.915501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.586 [2024-11-26 20:39:41.915541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.586 [2024-11-26 20:39:41.915555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.586 [2024-11-26 20:39:41.918686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.586 [2024-11-26 20:39:41.918730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.586 [2024-11-26 20:39:41.918744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.586 [2024-11-26 20:39:41.921824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.586 [2024-11-26 20:39:41.921870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.586 [2024-11-26 20:39:41.921883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.586 [2024-11-26 20:39:41.924980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.586 [2024-11-26 20:39:41.925019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.586 [2024-11-26 20:39:41.925032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.586 [2024-11-26 20:39:41.928114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.586 [2024-11-26 20:39:41.928156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.586 [2024-11-26 20:39:41.928170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.586 [2024-11-26 20:39:41.931273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.586 [2024-11-26 20:39:41.931381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.586 [2024-11-26 20:39:41.931394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.586 [2024-11-26 20:39:41.934500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.586 [2024-11-26 20:39:41.934548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.586 [2024-11-26 20:39:41.934563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.586 [2024-11-26 20:39:41.937661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.586 [2024-11-26 20:39:41.937700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.586 [2024-11-26 20:39:41.937714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.586 [2024-11-26 20:39:41.940806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.586 [2024-11-26 20:39:41.940845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.586 [2024-11-26 20:39:41.940859] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.586 [2024-11-26 20:39:41.943968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.586 [2024-11-26 20:39:41.944010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.586 [2024-11-26 20:39:41.944024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.586 [2024-11-26 20:39:41.947100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.586 [2024-11-26 20:39:41.947143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.586 [2024-11-26 20:39:41.947156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.586 [2024-11-26 20:39:41.950258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.586 [2024-11-26 20:39:41.950366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.586 [2024-11-26 20:39:41.950380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.586 [2024-11-26 20:39:41.953481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.586 [2024-11-26 20:39:41.953519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.586 [2024-11-26 20:39:41.953533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.586 [2024-11-26 20:39:41.956648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.586 [2024-11-26 20:39:41.956687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.586 [2024-11-26 20:39:41.956701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.586 [2024-11-26 20:39:41.959797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.586 [2024-11-26 20:39:41.959843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.586 [2024-11-26 20:39:41.959856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.587 [2024-11-26 20:39:41.962941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.587 [2024-11-26 20:39:41.962982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.587 [2024-11-26 
20:39:41.962995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.587 [2024-11-26 20:39:41.966106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.587 [2024-11-26 20:39:41.966144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.587 [2024-11-26 20:39:41.966158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.587 [2024-11-26 20:39:41.969253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.587 [2024-11-26 20:39:41.969360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.587 [2024-11-26 20:39:41.969373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.587 [2024-11-26 20:39:41.972484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.587 [2024-11-26 20:39:41.972524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.587 [2024-11-26 20:39:41.972538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.587 [2024-11-26 20:39:41.975630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.587 [2024-11-26 20:39:41.975670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.587 [2024-11-26 20:39:41.975684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.587 [2024-11-26 20:39:41.978775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.587 [2024-11-26 20:39:41.978816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.587 [2024-11-26 20:39:41.978829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.587 [2024-11-26 20:39:41.981921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.587 [2024-11-26 20:39:41.981965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.587 [2024-11-26 20:39:41.981995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.587 [2024-11-26 20:39:41.985126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.587 [2024-11-26 20:39:41.985171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:27.587 [2024-11-26 20:39:41.985185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.587 [2024-11-26 20:39:41.988268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.587 [2024-11-26 20:39:41.988374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.587 [2024-11-26 20:39:41.988388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.587 [2024-11-26 20:39:41.991492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.587 [2024-11-26 20:39:41.991538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.587 [2024-11-26 20:39:41.991557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.587 [2024-11-26 20:39:41.994692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.587 [2024-11-26 20:39:41.994737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.587 [2024-11-26 20:39:41.994751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.587 [2024-11-26 20:39:41.997853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.587 [2024-11-26 20:39:41.997897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.587 [2024-11-26 20:39:41.997910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.587 [2024-11-26 20:39:42.000968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.587 [2024-11-26 20:39:42.001012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.587 [2024-11-26 20:39:42.001026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.587 [2024-11-26 20:39:42.004120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.587 [2024-11-26 20:39:42.004160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.587 [2024-11-26 20:39:42.004174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.587 [2024-11-26 20:39:42.007268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.587 [2024-11-26 20:39:42.007382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:672 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.587 [2024-11-26 20:39:42.007396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.587 [2024-11-26 20:39:42.010483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.587 [2024-11-26 20:39:42.010530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.587 [2024-11-26 20:39:42.010543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.587 [2024-11-26 20:39:42.013648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.587 [2024-11-26 20:39:42.013692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.587 [2024-11-26 20:39:42.013706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.587 [2024-11-26 20:39:42.016780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.587 [2024-11-26 20:39:42.016820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.587 [2024-11-26 20:39:42.016835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.587 [2024-11-26 20:39:42.019938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.587 [2024-11-26 20:39:42.019981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.587 [2024-11-26 20:39:42.019994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.587 [2024-11-26 20:39:42.023090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.587 [2024-11-26 20:39:42.023136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.587 [2024-11-26 20:39:42.023149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.587 [2024-11-26 20:39:42.026259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.587 [2024-11-26 20:39:42.026378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.587 [2024-11-26 20:39:42.026392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.587 [2024-11-26 20:39:42.029468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.587 [2024-11-26 20:39:42.029512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.587 [2024-11-26 20:39:42.029525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.587 [2024-11-26 20:39:42.032671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.587 [2024-11-26 20:39:42.032712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.587 [2024-11-26 20:39:42.032725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.587 [2024-11-26 20:39:42.035804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.587 [2024-11-26 20:39:42.035848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.587 [2024-11-26 20:39:42.035863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.587 [2024-11-26 20:39:42.038977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.588 [2024-11-26 20:39:42.039023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.588 [2024-11-26 20:39:42.039037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.588 [2024-11-26 20:39:42.042143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.588 [2024-11-26 20:39:42.042189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.588 [2024-11-26 20:39:42.042203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.588 [2024-11-26 20:39:42.045271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.588 [2024-11-26 20:39:42.045381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.588 [2024-11-26 20:39:42.045394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.588 [2024-11-26 20:39:42.048475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.588 [2024-11-26 20:39:42.048520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.588 [2024-11-26 20:39:42.048534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.588 [2024-11-26 20:39:42.051658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.588 [2024-11-26 20:39:42.051699] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.588 [2024-11-26 20:39:42.051713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.588 [2024-11-26 20:39:42.054839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.588 [2024-11-26 20:39:42.054883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.588 [2024-11-26 20:39:42.054897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.588 [2024-11-26 20:39:42.057997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.588 [2024-11-26 20:39:42.058038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.588 [2024-11-26 20:39:42.058052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.588 [2024-11-26 20:39:42.061111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.588 [2024-11-26 20:39:42.061152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.588 [2024-11-26 20:39:42.061165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.588 [2024-11-26 20:39:42.064264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.588 [2024-11-26 20:39:42.064309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.588 [2024-11-26 20:39:42.064322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.588 [2024-11-26 20:39:42.067430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.588 [2024-11-26 20:39:42.067471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.588 [2024-11-26 20:39:42.067485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.588 [2024-11-26 20:39:42.070573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.588 [2024-11-26 20:39:42.070697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.588 [2024-11-26 20:39:42.070711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.588 [2024-11-26 20:39:42.073816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.588 [2024-11-26 20:39:42.073854] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.588 [2024-11-26 20:39:42.073868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.588 [2024-11-26 20:39:42.076946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.588 [2024-11-26 20:39:42.076992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.588 [2024-11-26 20:39:42.077006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.588 [2024-11-26 20:39:42.080114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.588 [2024-11-26 20:39:42.080157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.588 [2024-11-26 20:39:42.080170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.588 [2024-11-26 20:39:42.083255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.588 [2024-11-26 20:39:42.083301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.588 [2024-11-26 20:39:42.083314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.588 [2024-11-26 20:39:42.086394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.588 [2024-11-26 20:39:42.086440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.588 [2024-11-26 20:39:42.086454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.588 [2024-11-26 20:39:42.089527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.588 [2024-11-26 20:39:42.089657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.588 [2024-11-26 20:39:42.089672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.588 [2024-11-26 20:39:42.092779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.588 [2024-11-26 20:39:42.092823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.588 [2024-11-26 20:39:42.092836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.588 [2024-11-26 20:39:42.095918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.588 [2024-11-26 
20:39:42.095964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.588 [2024-11-26 20:39:42.095978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.588 [2024-11-26 20:39:42.099062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.588 [2024-11-26 20:39:42.099108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.588 [2024-11-26 20:39:42.099122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.588 [2024-11-26 20:39:42.102216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.588 [2024-11-26 20:39:42.102260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.588 [2024-11-26 20:39:42.102274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.588 [2024-11-26 20:39:42.105354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.588 [2024-11-26 20:39:42.105399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.588 [2024-11-26 20:39:42.105413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.588 [2024-11-26 20:39:42.108501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.588 [2024-11-26 20:39:42.108627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.588 [2024-11-26 20:39:42.108641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.588 [2024-11-26 20:39:42.111745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.588 [2024-11-26 20:39:42.111790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.588 [2024-11-26 20:39:42.111804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.588 [2024-11-26 20:39:42.114903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.588 [2024-11-26 20:39:42.114943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.588 [2024-11-26 20:39:42.114957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.588 [2024-11-26 20:39:42.118041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with 
pdu=0x200016eff3c8 00:16:27.588 [2024-11-26 20:39:42.118086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.589 [2024-11-26 20:39:42.118100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.589 [2024-11-26 20:39:42.121158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.589 [2024-11-26 20:39:42.121202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.589 [2024-11-26 20:39:42.121216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.589 [2024-11-26 20:39:42.124274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.589 [2024-11-26 20:39:42.124319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.589 [2024-11-26 20:39:42.124332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.589 [2024-11-26 20:39:42.127448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.589 [2024-11-26 20:39:42.127562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.589 [2024-11-26 20:39:42.127575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.589 [2024-11-26 20:39:42.130692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.589 [2024-11-26 20:39:42.130735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.589 [2024-11-26 20:39:42.130749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.589 [2024-11-26 20:39:42.133843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.589 [2024-11-26 20:39:42.133887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.589 [2024-11-26 20:39:42.133900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.848 [2024-11-26 20:39:42.136996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.848 [2024-11-26 20:39:42.137037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.848 [2024-11-26 20:39:42.137051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.848 [2024-11-26 20:39:42.140144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.848 [2024-11-26 20:39:42.140188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.848 [2024-11-26 20:39:42.140201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.848 9697.00 IOPS, 1212.12 MiB/s [2024-11-26T20:39:42.403Z] [2024-11-26 20:39:42.144584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.848 [2024-11-26 20:39:42.144683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.848 [2024-11-26 20:39:42.144696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.848 [2024-11-26 20:39:42.147774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.848 [2024-11-26 20:39:42.147815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.848 [2024-11-26 20:39:42.147828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.848 [2024-11-26 20:39:42.150949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.848 [2024-11-26 20:39:42.150994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.848 [2024-11-26 20:39:42.151009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.848 [2024-11-26 20:39:42.154110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.848 [2024-11-26 20:39:42.154151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.848 [2024-11-26 20:39:42.154165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.848 [2024-11-26 20:39:42.157266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.848 [2024-11-26 20:39:42.157307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.848 [2024-11-26 20:39:42.157321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.848 [2024-11-26 20:39:42.160404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.848 [2024-11-26 20:39:42.160522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.848 [2024-11-26 20:39:42.160536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.848 
[2024-11-26 20:39:42.163680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.848 [2024-11-26 20:39:42.163721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.848 [2024-11-26 20:39:42.163735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.848 [2024-11-26 20:39:42.166811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.848 [2024-11-26 20:39:42.166857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.848 [2024-11-26 20:39:42.166871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.848 [2024-11-26 20:39:42.169954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.848 [2024-11-26 20:39:42.170007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.848 [2024-11-26 20:39:42.170021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.848 [2024-11-26 20:39:42.173107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.848 [2024-11-26 20:39:42.173151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.173164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.176254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.176299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.176313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.179422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.179541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.179555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.182663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.182701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.182715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.185788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.185832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.185846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.188941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.188988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.189002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.192109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.192154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.192168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.195243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.195357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.195371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.198490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.198533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.198547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.201641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.201681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.201694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.204802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.204842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.204856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.207927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.207968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.207981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.211085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.211129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.211143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.214253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.214372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.214386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.217474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.217534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.217548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.220577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.220638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.220651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.223752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.223799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.223813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.226869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.226909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.226923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.230017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.230057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.230071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.233156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.233271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.233284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.236404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.236456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.236469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.239651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.239764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.239865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.242882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.242993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.243079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.246129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.246241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.246340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.249336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.249445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.249542] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.252564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.252687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.252782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.255767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.255884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.255983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.849 [2024-11-26 20:39:42.258961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.849 [2024-11-26 20:39:42.259080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.849 [2024-11-26 20:39:42.259179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.262172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.262284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.850 [2024-11-26 20:39:42.262373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.265372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.265484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.850 [2024-11-26 20:39:42.265584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.268605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.268712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.850 [2024-11-26 20:39:42.268800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.271796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.271910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.850 [2024-11-26 
20:39:42.271994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.275023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.275143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.850 [2024-11-26 20:39:42.275234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.278252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.278363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.850 [2024-11-26 20:39:42.278462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.281444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.281549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.850 [2024-11-26 20:39:42.281665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.284691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.284807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.850 [2024-11-26 20:39:42.284911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.287885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.287984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.850 [2024-11-26 20:39:42.287998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.291145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.291253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.850 [2024-11-26 20:39:42.291353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.294372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.294483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:27.850 [2024-11-26 20:39:42.294603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.297568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.297690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.850 [2024-11-26 20:39:42.297784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.300805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.300913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.850 [2024-11-26 20:39:42.300995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.304010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.304126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.850 [2024-11-26 20:39:42.304227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.307227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.307341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.850 [2024-11-26 20:39:42.307425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.310426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.310532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.850 [2024-11-26 20:39:42.310620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.313641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.313690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.850 [2024-11-26 20:39:42.313704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.316797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.316845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.850 [2024-11-26 20:39:42.316859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.319906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.319947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.850 [2024-11-26 20:39:42.319960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.323060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.323166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.850 [2024-11-26 20:39:42.323180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.326266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.326313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.850 [2024-11-26 20:39:42.326327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.329422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.329465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.850 [2024-11-26 20:39:42.329479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.332570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.332626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.850 [2024-11-26 20:39:42.332641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.335732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.335777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.850 [2024-11-26 20:39:42.335791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.338839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.338891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.850 [2024-11-26 20:39:42.338905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.341974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.342095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.850 [2024-11-26 20:39:42.342109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.345196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.850 [2024-11-26 20:39:42.345242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.850 [2024-11-26 20:39:42.345256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.850 [2024-11-26 20:39:42.348348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.851 [2024-11-26 20:39:42.348388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.851 [2024-11-26 20:39:42.348401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.851 [2024-11-26 20:39:42.351475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.851 [2024-11-26 20:39:42.351526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.851 [2024-11-26 20:39:42.351540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.851 [2024-11-26 20:39:42.354660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.851 [2024-11-26 20:39:42.354700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.851 [2024-11-26 20:39:42.354714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.851 [2024-11-26 20:39:42.357776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.851 [2024-11-26 20:39:42.357817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.851 [2024-11-26 20:39:42.357831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.851 [2024-11-26 20:39:42.360925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.851 [2024-11-26 20:39:42.361043] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.851 [2024-11-26 20:39:42.361057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.851 [2024-11-26 20:39:42.364161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.851 [2024-11-26 20:39:42.364204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.851 [2024-11-26 20:39:42.364218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.851 [2024-11-26 20:39:42.367321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.851 [2024-11-26 20:39:42.367364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.851 [2024-11-26 20:39:42.367378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.851 [2024-11-26 20:39:42.370475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.851 [2024-11-26 20:39:42.370527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.851 [2024-11-26 20:39:42.370541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.851 [2024-11-26 20:39:42.373668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.851 [2024-11-26 20:39:42.373715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.851 [2024-11-26 20:39:42.373728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.851 [2024-11-26 20:39:42.376810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.851 [2024-11-26 20:39:42.376858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.851 [2024-11-26 20:39:42.376872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.851 [2024-11-26 20:39:42.379972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.851 [2024-11-26 20:39:42.380078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.851 [2024-11-26 20:39:42.380092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.851 [2024-11-26 20:39:42.383210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.851 [2024-11-26 20:39:42.383256] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.851 [2024-11-26 20:39:42.383269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.851 [2024-11-26 20:39:42.386368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.851 [2024-11-26 20:39:42.386409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.851 [2024-11-26 20:39:42.386422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.851 [2024-11-26 20:39:42.389516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.851 [2024-11-26 20:39:42.389563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.851 [2024-11-26 20:39:42.389576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.851 [2024-11-26 20:39:42.392699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.851 [2024-11-26 20:39:42.392750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.851 [2024-11-26 20:39:42.392763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.851 [2024-11-26 20:39:42.395863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.851 [2024-11-26 20:39:42.395903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.851 [2024-11-26 20:39:42.395917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.851 [2024-11-26 20:39:42.399003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:27.851 [2024-11-26 20:39:42.399115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.851 [2024-11-26 20:39:42.399128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.110 [2024-11-26 20:39:42.402240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.110 [2024-11-26 20:39:42.402284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.110 [2024-11-26 20:39:42.402298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.110 [2024-11-26 20:39:42.405391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.110 [2024-11-26 
20:39:42.405432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.110 [2024-11-26 20:39:42.405446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.110 [2024-11-26 20:39:42.408524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.110 [2024-11-26 20:39:42.408570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.110 [2024-11-26 20:39:42.408585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.110 [2024-11-26 20:39:42.411642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.110 [2024-11-26 20:39:42.411684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.110 [2024-11-26 20:39:42.411698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.110 [2024-11-26 20:39:42.414769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.110 [2024-11-26 20:39:42.414818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.110 [2024-11-26 20:39:42.414833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.110 [2024-11-26 20:39:42.417927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.110 [2024-11-26 20:39:42.418048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.110 [2024-11-26 20:39:42.418062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.110 [2024-11-26 20:39:42.421126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.110 [2024-11-26 20:39:42.421171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.110 [2024-11-26 20:39:42.421186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.110 [2024-11-26 20:39:42.424265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.110 [2024-11-26 20:39:42.424306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.110 [2024-11-26 20:39:42.424321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.110 [2024-11-26 20:39:42.427429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with 
pdu=0x200016eff3c8 00:16:28.110 [2024-11-26 20:39:42.427473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.110 [2024-11-26 20:39:42.427487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.110 [2024-11-26 20:39:42.430546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.110 [2024-11-26 20:39:42.430604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.110 [2024-11-26 20:39:42.430618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.110 [2024-11-26 20:39:42.433679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.110 [2024-11-26 20:39:42.433726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.110 [2024-11-26 20:39:42.433740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.110 [2024-11-26 20:39:42.436809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.110 [2024-11-26 20:39:42.436853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.110 [2024-11-26 20:39:42.436867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.110 [2024-11-26 20:39:42.439970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.110 [2024-11-26 20:39:42.440015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.110 [2024-11-26 20:39:42.440029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.110 [2024-11-26 20:39:42.443108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.110 [2024-11-26 20:39:42.443157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.110 [2024-11-26 20:39:42.443171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.110 [2024-11-26 20:39:42.446258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.110 [2024-11-26 20:39:42.446301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.110 [2024-11-26 20:39:42.446314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.110 [2024-11-26 20:39:42.449347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.110 [2024-11-26 20:39:42.449398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.110 [2024-11-26 20:39:42.449412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.110 [2024-11-26 20:39:42.452535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.110 [2024-11-26 20:39:42.452665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.110 [2024-11-26 20:39:42.452679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.110 [2024-11-26 20:39:42.455849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.110 [2024-11-26 20:39:42.455970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.110 [2024-11-26 20:39:42.456099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.110 [2024-11-26 20:39:42.459059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.110 [2024-11-26 20:39:42.459177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.110 [2024-11-26 20:39:42.459278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.110 [2024-11-26 20:39:42.462278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.110 [2024-11-26 20:39:42.462388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.110 [2024-11-26 20:39:42.462483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.110 [2024-11-26 20:39:42.465495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.110 [2024-11-26 20:39:42.465612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.110 [2024-11-26 20:39:42.465743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.110 [2024-11-26 20:39:42.468731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.110 [2024-11-26 20:39:42.468833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.110 [2024-11-26 20:39:42.468847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.110 [2024-11-26 20:39:42.471931] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.110 [2024-11-26 20:39:42.472042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.110 [2024-11-26 20:39:42.472055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.111 [2024-11-26 20:39:42.475148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.475193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.475207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.111 [2024-11-26 20:39:42.478320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.478366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.478380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.111 [2024-11-26 20:39:42.481458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.481505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.481519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.111 [2024-11-26 20:39:42.484639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.484680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.484694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.111 [2024-11-26 20:39:42.487754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.487804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.487817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.111 [2024-11-26 20:39:42.490907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.491017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.491031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.111 [2024-11-26 20:39:42.494133] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.494179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.494193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.111 [2024-11-26 20:39:42.497254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.497303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.497317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.111 [2024-11-26 20:39:42.500365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.500412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.500426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.111 [2024-11-26 20:39:42.503529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.503573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.503600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.111 [2024-11-26 20:39:42.506708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.506751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.506765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.111 [2024-11-26 20:39:42.509828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.509870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.509884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.111 [2024-11-26 20:39:42.512967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.513082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.513096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.111 
[2024-11-26 20:39:42.516208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.516250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.516264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.111 [2024-11-26 20:39:42.519364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.519408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.519422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.111 [2024-11-26 20:39:42.522567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.522621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.522635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.111 [2024-11-26 20:39:42.525736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.525782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.525796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.111 [2024-11-26 20:39:42.528870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.528920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.528934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.111 [2024-11-26 20:39:42.532023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.532143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.532157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.111 [2024-11-26 20:39:42.535277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.535329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.535342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:16:28.111 [2024-11-26 20:39:42.538423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.538465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.538479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.111 [2024-11-26 20:39:42.541565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.541621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.541635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.111 [2024-11-26 20:39:42.544695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.544742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.544756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.111 [2024-11-26 20:39:42.547841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.547888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.547902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.111 [2024-11-26 20:39:42.550991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.551110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.551124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.111 [2024-11-26 20:39:42.554217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.554264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.554278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.111 [2024-11-26 20:39:42.557358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.111 [2024-11-26 20:39:42.557401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.111 [2024-11-26 20:39:42.557415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.112 [2024-11-26 20:39:42.560605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.112 [2024-11-26 20:39:42.560718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.112 [2024-11-26 20:39:42.560836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.112 [2024-11-26 20:39:42.563849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.112 [2024-11-26 20:39:42.563891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.112 [2024-11-26 20:39:42.563905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.112 [2024-11-26 20:39:42.566986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.112 [2024-11-26 20:39:42.567037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.112 [2024-11-26 20:39:42.567050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.112 [2024-11-26 20:39:42.570135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.112 [2024-11-26 20:39:42.570249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.112 [2024-11-26 20:39:42.570262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.112 [2024-11-26 20:39:42.573367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.112 [2024-11-26 20:39:42.573413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.112 [2024-11-26 20:39:42.573427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.112 [2024-11-26 20:39:42.576605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.112 [2024-11-26 20:39:42.576718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.112 [2024-11-26 20:39:42.576829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.112 [2024-11-26 20:39:42.579826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.112 [2024-11-26 20:39:42.579938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.112 [2024-11-26 20:39:42.580088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.112 [2024-11-26 20:39:42.583073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.112 [2024-11-26 20:39:42.583191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.112 [2024-11-26 20:39:42.583283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.112 [2024-11-26 20:39:42.586287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.112 [2024-11-26 20:39:42.586408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.112 [2024-11-26 20:39:42.586543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.112 [2024-11-26 20:39:42.589498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.112 [2024-11-26 20:39:42.589615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.112 [2024-11-26 20:39:42.589733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.112 [2024-11-26 20:39:42.592719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.112 [2024-11-26 20:39:42.592839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.112 [2024-11-26 20:39:42.592945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.112 [2024-11-26 20:39:42.595967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.112 [2024-11-26 20:39:42.596088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.112 [2024-11-26 20:39:42.596194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.112 [2024-11-26 20:39:42.599188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.112 [2024-11-26 20:39:42.599311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.112 [2024-11-26 20:39:42.599400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.112 [2024-11-26 20:39:42.602402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.112 [2024-11-26 20:39:42.602516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.112 [2024-11-26 20:39:42.602621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.112 [2024-11-26 20:39:42.605640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.112 [2024-11-26 20:39:42.605682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.112 [2024-11-26 20:39:42.605697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.112 [2024-11-26 20:39:42.608784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.112 [2024-11-26 20:39:42.608831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.112 [2024-11-26 20:39:42.608844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.112 [2024-11-26 20:39:42.611939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.112 [2024-11-26 20:39:42.611981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.112 [2024-11-26 20:39:42.611995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.112 [2024-11-26 20:39:42.615095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.112 [2024-11-26 20:39:42.615140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.112 [2024-11-26 20:39:42.615154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.112 [2024-11-26 20:39:42.618265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.112 [2024-11-26 20:39:42.618304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.112 [2024-11-26 20:39:42.618318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.112 [2024-11-26 20:39:42.621388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.112 [2024-11-26 20:39:42.621432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.112 [2024-11-26 20:39:42.621446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.112 [2024-11-26 20:39:42.624561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.112 [2024-11-26 20:39:42.624683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.112 [2024-11-26 
20:39:42.624697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.112 [2024-11-26 20:39:42.627806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.112 [2024-11-26 20:39:42.627847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.112 [2024-11-26 20:39:42.627861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.112 [2024-11-26 20:39:42.630963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.112 [2024-11-26 20:39:42.631005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.112 [2024-11-26 20:39:42.631019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.112 [2024-11-26 20:39:42.634112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.112 [2024-11-26 20:39:42.634156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.112 [2024-11-26 20:39:42.634170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.112 [2024-11-26 20:39:42.637211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.112 [2024-11-26 20:39:42.637258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.112 [2024-11-26 20:39:42.637272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.112 [2024-11-26 20:39:42.640357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.112 [2024-11-26 20:39:42.640402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.112 [2024-11-26 20:39:42.640416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.113 [2024-11-26 20:39:42.643517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.113 [2024-11-26 20:39:42.643651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.113 [2024-11-26 20:39:42.643665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.113 [2024-11-26 20:39:42.646735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.113 [2024-11-26 20:39:42.646781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:28.113 [2024-11-26 20:39:42.646795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.113 [2024-11-26 20:39:42.649880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.113 [2024-11-26 20:39:42.649927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.113 [2024-11-26 20:39:42.649941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.113 [2024-11-26 20:39:42.653032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.113 [2024-11-26 20:39:42.653081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.113 [2024-11-26 20:39:42.653094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.113 [2024-11-26 20:39:42.656196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.113 [2024-11-26 20:39:42.656242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.113 [2024-11-26 20:39:42.656256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.113 [2024-11-26 20:39:42.659353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.113 [2024-11-26 20:39:42.659395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.113 [2024-11-26 20:39:42.659409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.373 [2024-11-26 20:39:42.662518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.373 [2024-11-26 20:39:42.662641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.373 [2024-11-26 20:39:42.662656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.373 [2024-11-26 20:39:42.665740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.373 [2024-11-26 20:39:42.665784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.373 [2024-11-26 20:39:42.665798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.373 [2024-11-26 20:39:42.668897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.373 [2024-11-26 20:39:42.668946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:352 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.373 [2024-11-26 20:39:42.668960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.373 [2024-11-26 20:39:42.671985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.373 [2024-11-26 20:39:42.672033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.373 [2024-11-26 20:39:42.672047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.373 [2024-11-26 20:39:42.675111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.373 [2024-11-26 20:39:42.675154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.373 [2024-11-26 20:39:42.675168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.373 [2024-11-26 20:39:42.678237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.373 [2024-11-26 20:39:42.678280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.373 [2024-11-26 20:39:42.678294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.373 [2024-11-26 20:39:42.681317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.373 [2024-11-26 20:39:42.681429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.373 [2024-11-26 20:39:42.681443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.373 [2024-11-26 20:39:42.684551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.373 [2024-11-26 20:39:42.684604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.373 [2024-11-26 20:39:42.684618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.373 [2024-11-26 20:39:42.687699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.373 [2024-11-26 20:39:42.687744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.373 [2024-11-26 20:39:42.687757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.373 [2024-11-26 20:39:42.690851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.373 [2024-11-26 20:39:42.690899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.373 [2024-11-26 20:39:42.690913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.373 [2024-11-26 20:39:42.694017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.373 [2024-11-26 20:39:42.694059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.373 [2024-11-26 20:39:42.694073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.373 [2024-11-26 20:39:42.697154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.373 [2024-11-26 20:39:42.697201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.373 [2024-11-26 20:39:42.697215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.373 [2024-11-26 20:39:42.700313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.373 [2024-11-26 20:39:42.700428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.373 [2024-11-26 20:39:42.700442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.373 [2024-11-26 20:39:42.703530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.373 [2024-11-26 20:39:42.703578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.373 [2024-11-26 20:39:42.703605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.373 [2024-11-26 20:39:42.706679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.373 [2024-11-26 20:39:42.706722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.373 [2024-11-26 20:39:42.706736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.373 [2024-11-26 20:39:42.709799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.373 [2024-11-26 20:39:42.709847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.373 [2024-11-26 20:39:42.709861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.373 [2024-11-26 20:39:42.712944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.373 [2024-11-26 20:39:42.712993] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.373 [2024-11-26 20:39:42.713007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.373 [2024-11-26 20:39:42.716091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.373 [2024-11-26 20:39:42.716206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.373 [2024-11-26 20:39:42.716220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.373 [2024-11-26 20:39:42.719317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.373 [2024-11-26 20:39:42.719362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.373 [2024-11-26 20:39:42.719375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.373 [2024-11-26 20:39:42.722447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.373 [2024-11-26 20:39:42.722495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.373 [2024-11-26 20:39:42.722509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.373 [2024-11-26 20:39:42.725609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.373 [2024-11-26 20:39:42.725649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.373 [2024-11-26 20:39:42.725663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.373 [2024-11-26 20:39:42.728756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.373 [2024-11-26 20:39:42.728805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.373 [2024-11-26 20:39:42.728819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.373 [2024-11-26 20:39:42.731906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.373 [2024-11-26 20:39:42.731948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.373 [2024-11-26 20:39:42.731961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.373 [2024-11-26 20:39:42.735047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.373 [2024-11-26 20:39:42.735169] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.373 [2024-11-26 20:39:42.735183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.373 [2024-11-26 20:39:42.738292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.373 [2024-11-26 20:39:42.738339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.373 [2024-11-26 20:39:42.738352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.373 [2024-11-26 20:39:42.741416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 20:39:42.741459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.741473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.744558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 20:39:42.744613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.744627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.747735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 20:39:42.747780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.747793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.750850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 20:39:42.750896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.750910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.753989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 20:39:42.754096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.754109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.757194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 
20:39:42.757237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.757251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.760293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 20:39:42.760338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.760352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.763469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 20:39:42.763512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.763526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.766642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 20:39:42.766685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.766698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.769765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 20:39:42.769807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.769821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.772900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 20:39:42.773019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.773032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.776145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 20:39:42.776189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.776203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.779241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 
00:16:28.374 [2024-11-26 20:39:42.779288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.779302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.782393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 20:39:42.782440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.782454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.785536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 20:39:42.785581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.785607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.788699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 20:39:42.788745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.788759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.791865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 20:39:42.791915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.791929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.795026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 20:39:42.795074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.795088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.798154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 20:39:42.798202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.798215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.801296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 20:39:42.801343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.801357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.804443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 20:39:42.804487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.804500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.807598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 20:39:42.807644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.807658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.810739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 20:39:42.810792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.810806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.813862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 20:39:42.813914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.813927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.817008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 20:39:42.817049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.817062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.820143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 20:39:42.820189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.820203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.823309] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 20:39:42.823353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.374 [2024-11-26 20:39:42.823367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.374 [2024-11-26 20:39:42.826483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.374 [2024-11-26 20:39:42.826617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.826631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.375 [2024-11-26 20:39:42.829687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.829733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.829747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.375 [2024-11-26 20:39:42.832846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.832889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.832902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.375 [2024-11-26 20:39:42.836005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.836047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.836060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.375 [2024-11-26 20:39:42.839151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.839193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.839207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.375 [2024-11-26 20:39:42.842292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.842337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.842351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.375 [2024-11-26 20:39:42.845424] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.845536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.845550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.375 [2024-11-26 20:39:42.848651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.848698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.848711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.375 [2024-11-26 20:39:42.851804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.851854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.851867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.375 [2024-11-26 20:39:42.854978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.855028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.855042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.375 [2024-11-26 20:39:42.858151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.858196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.858209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.375 [2024-11-26 20:39:42.861286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.861332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.861345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.375 [2024-11-26 20:39:42.864414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.864542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.864556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.375 
[2024-11-26 20:39:42.867685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.867727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.867740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.375 [2024-11-26 20:39:42.870885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.870927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.870941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.375 [2024-11-26 20:39:42.874016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.874062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.874077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.375 [2024-11-26 20:39:42.877185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.877232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.877246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.375 [2024-11-26 20:39:42.880354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.880396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.880409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.375 [2024-11-26 20:39:42.883502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.883640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.883654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.375 [2024-11-26 20:39:42.886789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.886831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.886844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:16:28.375 [2024-11-26 20:39:42.889955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.890009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.890023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.375 [2024-11-26 20:39:42.893089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.893134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.893148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.375 [2024-11-26 20:39:42.896246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.896289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.896303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.375 [2024-11-26 20:39:42.899393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.899436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.899450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.375 [2024-11-26 20:39:42.902546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.902671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.902684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.375 [2024-11-26 20:39:42.905780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.905828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.905841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.375 [2024-11-26 20:39:42.908973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.909017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.909030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.375 [2024-11-26 20:39:42.912124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.375 [2024-11-26 20:39:42.912164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.375 [2024-11-26 20:39:42.912178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.376 [2024-11-26 20:39:42.915287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.376 [2024-11-26 20:39:42.915334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.376 [2024-11-26 20:39:42.915348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.376 [2024-11-26 20:39:42.918432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.376 [2024-11-26 20:39:42.918475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.376 [2024-11-26 20:39:42.918489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.376 [2024-11-26 20:39:42.921561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.376 [2024-11-26 20:39:42.921688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.376 [2024-11-26 20:39:42.921702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.636 [2024-11-26 20:39:42.924820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.636 [2024-11-26 20:39:42.924862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.636 [2024-11-26 20:39:42.924876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.636 [2024-11-26 20:39:42.927967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.636 [2024-11-26 20:39:42.928017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.636 [2024-11-26 20:39:42.928031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.636 [2024-11-26 20:39:42.931124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.636 [2024-11-26 20:39:42.931176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.636 [2024-11-26 20:39:42.931190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.636 [2024-11-26 20:39:42.934293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.636 [2024-11-26 20:39:42.934339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.636 [2024-11-26 20:39:42.934353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.636 [2024-11-26 20:39:42.937419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.636 [2024-11-26 20:39:42.937460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.636 [2024-11-26 20:39:42.937474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.636 [2024-11-26 20:39:42.940560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.636 [2024-11-26 20:39:42.940686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.636 [2024-11-26 20:39:42.940700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.636 [2024-11-26 20:39:42.943788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.636 [2024-11-26 20:39:42.943837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.636 [2024-11-26 20:39:42.943851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.636 [2024-11-26 20:39:42.946972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.636 [2024-11-26 20:39:42.947014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.636 [2024-11-26 20:39:42.947028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.636 [2024-11-26 20:39:42.950127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.636 [2024-11-26 20:39:42.950172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.636 [2024-11-26 20:39:42.950186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.636 [2024-11-26 20:39:42.953278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.636 [2024-11-26 20:39:42.953327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.636 [2024-11-26 20:39:42.953341] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.636 [2024-11-26 20:39:42.956446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.636 [2024-11-26 20:39:42.956489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.636 [2024-11-26 20:39:42.956503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.636 [2024-11-26 20:39:42.959622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.636 [2024-11-26 20:39:42.959663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.636 [2024-11-26 20:39:42.959677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.636 [2024-11-26 20:39:42.962762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.636 [2024-11-26 20:39:42.962804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.636 [2024-11-26 20:39:42.962818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.636 [2024-11-26 20:39:42.965909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.636 [2024-11-26 20:39:42.965951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.636 [2024-11-26 20:39:42.965965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.636 [2024-11-26 20:39:42.969059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.636 [2024-11-26 20:39:42.969110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.636 [2024-11-26 20:39:42.969124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.636 [2024-11-26 20:39:42.972227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.636 [2024-11-26 20:39:42.972269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.636 [2024-11-26 20:39:42.972282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.636 [2024-11-26 20:39:42.975389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.636 [2024-11-26 20:39:42.975434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.636 [2024-11-26 
20:39:42.975448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.636 [2024-11-26 20:39:42.978561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.636 [2024-11-26 20:39:42.978711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.637 [2024-11-26 20:39:42.978725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.637 [2024-11-26 20:39:42.981790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.637 [2024-11-26 20:39:42.981833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.637 [2024-11-26 20:39:42.981847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.637 [2024-11-26 20:39:42.984963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.637 [2024-11-26 20:39:42.985005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.637 [2024-11-26 20:39:42.985018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.637 [2024-11-26 20:39:42.988102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.637 [2024-11-26 20:39:42.988152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.637 [2024-11-26 20:39:42.988166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.637 [2024-11-26 20:39:42.991284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.637 [2024-11-26 20:39:42.991325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.637 [2024-11-26 20:39:42.991339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.637 [2024-11-26 20:39:42.994414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.637 [2024-11-26 20:39:42.994466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.637 [2024-11-26 20:39:42.994480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.637 [2024-11-26 20:39:42.997574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.637 [2024-11-26 20:39:42.997697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:28.637 [2024-11-26 20:39:42.997711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.637 [2024-11-26 20:39:43.000813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.637 [2024-11-26 20:39:43.000863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.637 [2024-11-26 20:39:43.000876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.637 [2024-11-26 20:39:43.003960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.637 [2024-11-26 20:39:43.004005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.637 [2024-11-26 20:39:43.004019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.637 [2024-11-26 20:39:43.007115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.637 [2024-11-26 20:39:43.007157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.637 [2024-11-26 20:39:43.007171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.637 [2024-11-26 20:39:43.010247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.637 [2024-11-26 20:39:43.010294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.637 [2024-11-26 20:39:43.010308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.637 [2024-11-26 20:39:43.013420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.637 [2024-11-26 20:39:43.013466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.637 [2024-11-26 20:39:43.013480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.637 [2024-11-26 20:39:43.016585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.637 [2024-11-26 20:39:43.016635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.637 [2024-11-26 20:39:43.016649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.637 [2024-11-26 20:39:43.019710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.637 [2024-11-26 20:39:43.019760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.637 [2024-11-26 20:39:43.019774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.637 [2024-11-26 20:39:43.022826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.637 [2024-11-26 20:39:43.022872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.637 [2024-11-26 20:39:43.022886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.637 [2024-11-26 20:39:43.026010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.637 [2024-11-26 20:39:43.026051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.637 [2024-11-26 20:39:43.026065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.637 [2024-11-26 20:39:43.029142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.637 [2024-11-26 20:39:43.029191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.637 [2024-11-26 20:39:43.029205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.637 [2024-11-26 20:39:43.032298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.637 [2024-11-26 20:39:43.032340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.637 [2024-11-26 20:39:43.032354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.637 [2024-11-26 20:39:43.035425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.637 [2024-11-26 20:39:43.035540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.637 [2024-11-26 20:39:43.035554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.637 [2024-11-26 20:39:43.038682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.637 [2024-11-26 20:39:43.038717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.637 [2024-11-26 20:39:43.038730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.637 [2024-11-26 20:39:43.041808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.637 [2024-11-26 20:39:43.041854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.638 [2024-11-26 20:39:43.041868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.638 [2024-11-26 20:39:43.044992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.638 [2024-11-26 20:39:43.045033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.638 [2024-11-26 20:39:43.045047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.638 [2024-11-26 20:39:43.048143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.638 [2024-11-26 20:39:43.048188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.638 [2024-11-26 20:39:43.048201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.638 [2024-11-26 20:39:43.051285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.638 [2024-11-26 20:39:43.051402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.638 [2024-11-26 20:39:43.051416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.638 [2024-11-26 20:39:43.054500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.638 [2024-11-26 20:39:43.054544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.638 [2024-11-26 20:39:43.054558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.638 [2024-11-26 20:39:43.057648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.638 [2024-11-26 20:39:43.057691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.638 [2024-11-26 20:39:43.057705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.638 [2024-11-26 20:39:43.060786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.638 [2024-11-26 20:39:43.060827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.638 [2024-11-26 20:39:43.060841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.638 [2024-11-26 20:39:43.063940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.638 [2024-11-26 20:39:43.063990] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.638 [2024-11-26 20:39:43.064004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.638 [2024-11-26 20:39:43.067089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.638 [2024-11-26 20:39:43.067135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.638 [2024-11-26 20:39:43.067149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.638 [2024-11-26 20:39:43.070236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.638 [2024-11-26 20:39:43.070345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.638 [2024-11-26 20:39:43.070359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.638 [2024-11-26 20:39:43.073461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.638 [2024-11-26 20:39:43.073510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.638 [2024-11-26 20:39:43.073523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.638 [2024-11-26 20:39:43.076645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.638 [2024-11-26 20:39:43.076686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.638 [2024-11-26 20:39:43.076700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.638 [2024-11-26 20:39:43.079795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.638 [2024-11-26 20:39:43.079838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.638 [2024-11-26 20:39:43.079852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.638 [2024-11-26 20:39:43.082919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.638 [2024-11-26 20:39:43.082967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.638 [2024-11-26 20:39:43.082981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.638 [2024-11-26 20:39:43.086087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.638 [2024-11-26 20:39:43.086134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.638 [2024-11-26 20:39:43.086148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.638 [2024-11-26 20:39:43.089195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.638 [2024-11-26 20:39:43.089307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.638 [2024-11-26 20:39:43.089320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.638 [2024-11-26 20:39:43.092403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.638 [2024-11-26 20:39:43.092445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.638 [2024-11-26 20:39:43.092459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.638 [2024-11-26 20:39:43.095527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.638 [2024-11-26 20:39:43.095570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.638 [2024-11-26 20:39:43.095584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.638 [2024-11-26 20:39:43.098707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.638 [2024-11-26 20:39:43.098748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.638 [2024-11-26 20:39:43.098762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.638 [2024-11-26 20:39:43.101829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.638 [2024-11-26 20:39:43.101874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.638 [2024-11-26 20:39:43.101887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.638 [2024-11-26 20:39:43.104965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.638 [2024-11-26 20:39:43.105012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.638 [2024-11-26 20:39:43.105026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.638 [2024-11-26 20:39:43.108132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.639 [2024-11-26 
20:39:43.108245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.639 [2024-11-26 20:39:43.108259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.639 [2024-11-26 20:39:43.111380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.639 [2024-11-26 20:39:43.111421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.639 [2024-11-26 20:39:43.111435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.639 [2024-11-26 20:39:43.114603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.639 [2024-11-26 20:39:43.114720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.639 [2024-11-26 20:39:43.114827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.639 [2024-11-26 20:39:43.117817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.639 [2024-11-26 20:39:43.117936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.639 [2024-11-26 20:39:43.118035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.639 [2024-11-26 20:39:43.120995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.639 [2024-11-26 20:39:43.121104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.639 [2024-11-26 20:39:43.121244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.639 [2024-11-26 20:39:43.124235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.639 [2024-11-26 20:39:43.124351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.639 [2024-11-26 20:39:43.124446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.639 [2024-11-26 20:39:43.127453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.639 [2024-11-26 20:39:43.127567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.639 [2024-11-26 20:39:43.127735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.639 [2024-11-26 20:39:43.130704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 
00:16:28.639 [2024-11-26 20:39:43.130816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.639 [2024-11-26 20:39:43.130905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.639 [2024-11-26 20:39:43.133896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.639 [2024-11-26 20:39:43.134021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.639 [2024-11-26 20:39:43.134115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.639 [2024-11-26 20:39:43.137111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.639 [2024-11-26 20:39:43.137225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.639 [2024-11-26 20:39:43.137319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.639 [2024-11-26 20:39:43.140296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.639 [2024-11-26 20:39:43.140343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.639 [2024-11-26 20:39:43.140358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.639 9730.50 IOPS, 1216.31 MiB/s [2024-11-26T20:39:43.194Z] [2024-11-26 20:39:43.144674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128d5b0) with pdu=0x200016eff3c8 00:16:28.639 [2024-11-26 20:39:43.144736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.639 [2024-11-26 20:39:43.144750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:28.639 00:16:28.639 Latency(us) 00:16:28.639 [2024-11-26T20:39:43.194Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.639 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:28.639 nvme0n1 : 2.00 9725.98 1215.75 0.00 0.00 1641.25 1121.67 9023.80 00:16:28.639 [2024-11-26T20:39:43.194Z] =================================================================================================================== 00:16:28.639 [2024-11-26T20:39:43.194Z] Total : 9725.98 1215.75 0.00 0.00 1641.25 1121.67 9023.80 00:16:28.639 { 00:16:28.639 "results": [ 00:16:28.639 { 00:16:28.639 "job": "nvme0n1", 00:16:28.639 "core_mask": "0x2", 00:16:28.639 "workload": "randwrite", 00:16:28.639 "status": "finished", 00:16:28.639 "queue_depth": 16, 00:16:28.639 "io_size": 131072, 00:16:28.639 "runtime": 2.003295, 00:16:28.639 "iops": 9725.976453792377, 00:16:28.639 "mibps": 1215.747056724047, 00:16:28.639 "io_failed": 0, 00:16:28.639 "io_timeout": 0, 00:16:28.639 "avg_latency_us": 1641.2545154209372, 00:16:28.639 "min_latency_us": 
1121.6738461538462, 00:16:28.639 "max_latency_us": 9023.803076923077 00:16:28.639 } 00:16:28.639 ], 00:16:28.639 "core_count": 1 00:16:28.639 } 00:16:28.639 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:28.639 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:28.639 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:28.639 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:28.639 | .driver_specific 00:16:28.639 | .nvme_error 00:16:28.639 | .status_code 00:16:28.639 | .command_transient_transport_error' 00:16:28.897 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 629 > 0 )) 00:16:28.897 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79477 00:16:28.897 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79477 ']' 00:16:28.897 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79477 00:16:28.897 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:16:28.897 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:28.897 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79477 00:16:28.897 killing process with pid 79477 00:16:28.897 Received shutdown signal, test time was about 2.000000 seconds 00:16:28.897 00:16:28.897 Latency(us) 00:16:28.897 [2024-11-26T20:39:43.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.897 [2024-11-26T20:39:43.452Z] =================================================================================================================== 00:16:28.897 [2024-11-26T20:39:43.452Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:28.897 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:28.897 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:28.897 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79477' 00:16:28.897 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79477 00:16:28.897 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79477 00:16:29.154 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 79280 00:16:29.154 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79280 ']' 00:16:29.154 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79280 00:16:29.154 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:16:29.154 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:29.154 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79280 00:16:29.154 killing process with pid 79280 00:16:29.154 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:29.154 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:29.154 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79280' 00:16:29.154 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79280 00:16:29.154 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79280 00:16:29.154 00:16:29.154 real 0m16.773s 00:16:29.154 user 0m32.758s 00:16:29.154 sys 0m3.503s 00:16:29.154 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.154 ************************************ 00:16:29.154 END TEST nvmf_digest_error 00:16:29.154 ************************************ 00:16:29.154 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:29.154 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:16:29.154 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:16:29.154 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:29.154 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:29.411 rmmod nvme_tcp 00:16:29.411 rmmod nvme_fabrics 00:16:29.411 rmmod nvme_keyring 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 79280 ']' 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 79280 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 79280 ']' 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 79280 00:16:29.411 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79280) - No such process 00:16:29.411 Process with pid 79280 is not found 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 79280 is not found' 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # 
iptables-save 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:29.411 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:29.669 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:29.669 20:39:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:16:29.669 00:16:29.669 real 0m34.722s 00:16:29.669 user 1m6.104s 00:16:29.669 sys 0m7.539s 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:29.669 ************************************ 00:16:29.669 END TEST nvmf_digest 00:16:29.669 ************************************ 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.669 
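The nvmftestfini cleanup being traced at this point reduces to three steps: unload the kernel NVMe-oF initiator modules, strip the SPDK_NVMF-tagged firewall rules, and tear down the veth, bridge and namespace topology. A minimal standalone sketch of that order follows; the interface, bridge and namespace names are the ones printed in the trace, and the final ip netns delete is an assumption about what remove_spdk_ns ends up doing, since the log only shows the wrapper call.

    #!/usr/bin/env bash
    # Sketch of the nvmftestfini teardown order seen in the surrounding trace.
    # Best-effort cleanup, mirroring the harness's `set +e`. Run as root.
    set +e

    # 1. Unload the initiator-side kernel modules pulled in by the test.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # 2. Drop SPDK_NVMF-tagged iptables rules, keep everything else.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # 3. Detach the bridge ports, bring them down, then delete links and bridge.
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster
        ip link set "$port" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2

    # 4. Assumed final step of remove_spdk_ns: drop the target namespace.
    ip netns delete nvmf_tgt_ns_spdk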
************************************ 00:16:29.669 START TEST nvmf_host_multipath 00:16:29.669 ************************************ 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:16:29.669 * Looking for test storage... 00:16:29.669 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:29.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:29.669 --rc genhtml_branch_coverage=1 00:16:29.669 --rc genhtml_function_coverage=1 00:16:29.669 --rc genhtml_legend=1 00:16:29.669 --rc geninfo_all_blocks=1 00:16:29.669 --rc geninfo_unexecuted_blocks=1 00:16:29.669 00:16:29.669 ' 00:16:29.669 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:29.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:29.669 --rc genhtml_branch_coverage=1 00:16:29.670 --rc genhtml_function_coverage=1 00:16:29.670 --rc genhtml_legend=1 00:16:29.670 --rc geninfo_all_blocks=1 00:16:29.670 --rc geninfo_unexecuted_blocks=1 00:16:29.670 00:16:29.670 ' 00:16:29.670 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:29.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:29.670 --rc genhtml_branch_coverage=1 00:16:29.670 --rc genhtml_function_coverage=1 00:16:29.670 --rc genhtml_legend=1 00:16:29.670 --rc geninfo_all_blocks=1 00:16:29.670 --rc geninfo_unexecuted_blocks=1 00:16:29.670 00:16:29.670 ' 00:16:29.670 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:29.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:29.670 --rc genhtml_branch_coverage=1 00:16:29.670 --rc genhtml_function_coverage=1 00:16:29.670 --rc genhtml_legend=1 00:16:29.670 --rc geninfo_all_blocks=1 00:16:29.670 --rc geninfo_unexecuted_blocks=1 00:16:29.670 00:16:29.670 ' 00:16:29.670 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:29.670 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:29.670 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:29.670 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:29.670 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:29.670 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:29.670 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:29.670 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:29.670 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:29.670 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:29.670 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:29.670 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:29.938 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:29.938 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:29.939 Cannot find device "nvmf_init_br" 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:29.939 Cannot find device "nvmf_init_br2" 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:29.939 Cannot find device "nvmf_tgt_br" 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:29.939 Cannot find device "nvmf_tgt_br2" 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:29.939 Cannot find device "nvmf_init_br" 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:29.939 Cannot find device "nvmf_init_br2" 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:29.939 Cannot find device "nvmf_tgt_br" 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:29.939 Cannot find device "nvmf_tgt_br2" 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:29.939 Cannot find device "nvmf_br" 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:29.939 Cannot find device "nvmf_init_if" 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:29.939 Cannot find device "nvmf_init_if2" 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:16:29.939 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:29.939 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
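The commands traced above (nvmf/common.sh@177 onward) are nvmf_veth_init assembling the two-path test network: one namespace for the target, two veth pairs per side, and a single bridge joining the peer ends so 10.0.0.1/10.0.0.2 (initiator side) can reach 10.0.0.3/10.0.0.4 (target side). Condensed into a standalone sketch with the same interface names and 10.0.0.0/24 addressing as the log; this is a simplified reading of the trace (run as root, iproute2), not the SPDK helper itself:
  # namespace for the target, plus the four veth pairs seen in the trace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  # target ends move into the namespace; addresses match the log (10.0.0.1-4/24)
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bring everything up and enslave the host-side peers to one bridge
  for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
After this, the iptables ACCEPT rules and the four pings in the trace below simply confirm that NVMe/TCP traffic on port 4420 can cross the bridge in both directions.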
00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:29.939 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:30.200 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:30.200 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:16:30.200 00:16:30.200 --- 10.0.0.3 ping statistics --- 00:16:30.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.200 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:30.200 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:30.200 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.027 ms 00:16:30.200 00:16:30.200 --- 10.0.0.4 ping statistics --- 00:16:30.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.200 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:30.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:30.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:16:30.200 00:16:30.200 --- 10.0.0.1 ping statistics --- 00:16:30.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.200 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:30.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:30.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:16:30.200 00:16:30.200 --- 10.0.0.2 ping statistics --- 00:16:30.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.200 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=79793 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 79793 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 79793 ']' 00:16:30.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:30.200 20:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:30.200 [2024-11-26 20:39:44.566863] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:16:30.201 [2024-11-26 20:39:44.566927] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.201 [2024-11-26 20:39:44.707570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:30.201 [2024-11-26 20:39:44.744582] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.201 [2024-11-26 20:39:44.744637] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:30.201 [2024-11-26 20:39:44.744645] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.201 [2024-11-26 20:39:44.744650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.201 [2024-11-26 20:39:44.744654] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:30.201 [2024-11-26 20:39:44.745635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.201 [2024-11-26 20:39:44.745716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.460 [2024-11-26 20:39:44.778775] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:31.025 20:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:31.025 20:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:16:31.025 20:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:31.025 20:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:31.025 20:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:31.025 20:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.025 20:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=79793 00:16:31.025 20:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:31.283 [2024-11-26 20:39:45.755233] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:31.283 20:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:31.540 Malloc0 00:16:31.540 20:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:31.798 20:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:32.057 20:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:32.315 [2024-11-26 20:39:46.612415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:32.315 20:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:32.315 [2024-11-26 20:39:46.828504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:32.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:32.315 20:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=79843 00:16:32.315 20:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:32.315 20:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 79843 /var/tmp/bdevperf.sock 00:16:32.315 20:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:32.315 20:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 79843 ']' 00:16:32.315 20:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:32.315 20:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:32.315 20:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:32.315 20:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:32.315 20:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:32.883 20:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:32.883 20:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:16:32.883 20:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:32.883 20:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:33.141 Nvme0n1 00:16:33.141 20:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:33.399 Nvme0n1 00:16:33.399 20:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:16:33.399 20:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:34.772 20:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:16:34.772 20:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:34.772 20:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:35.030 20:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:16:35.030 20:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=79881 00:16:35.030 20:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79793 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:35.030 20:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:16:41.592 20:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:41.592 20:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:16:41.592 20:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:16:41.592 20:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:41.592 Attaching 4 probes... 00:16:41.592 @path[10.0.0.3, 4421]: 25396 00:16:41.592 @path[10.0.0.3, 4421]: 26001 00:16:41.592 @path[10.0.0.3, 4421]: 26106 00:16:41.592 @path[10.0.0.3, 4421]: 26264 00:16:41.592 @path[10.0.0.3, 4421]: 26347 00:16:41.592 20:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:16:41.592 20:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:41.592 20:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:16:41.592 20:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:16:41.592 20:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:16:41.592 20:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:16:41.592 20:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 79881 00:16:41.592 20:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:41.592 20:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:16:41.592 20:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:41.592 20:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:41.592 20:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:16:41.592 20:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79793 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:41.592 20:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=79994 00:16:41.592 20:39:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:16:48.151 20:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:48.151 20:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:16:48.151 20:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:16:48.151 20:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:48.151 Attaching 4 probes... 00:16:48.151 @path[10.0.0.3, 4420]: 25319 00:16:48.151 @path[10.0.0.3, 4420]: 25657 00:16:48.151 @path[10.0.0.3, 4420]: 19639 00:16:48.151 @path[10.0.0.3, 4420]: 20197 00:16:48.151 @path[10.0.0.3, 4420]: 20135 00:16:48.151 20:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:48.151 20:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:16:48.151 20:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:16:48.151 20:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:16:48.151 20:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:16:48.151 20:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:16:48.151 20:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 79994 00:16:48.151 20:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:48.151 20:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:16:48.151 20:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:48.151 20:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:48.151 20:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:16:48.151 20:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79793 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:48.151 20:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80112 00:16:48.151 20:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:16:54.711 20:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:54.711 20:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:16:54.711 20:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:16:54.711 20:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:54.711 Attaching 4 probes... 00:16:54.711 @path[10.0.0.3, 4421]: 16434 00:16:54.711 @path[10.0.0.3, 4421]: 25436 00:16:54.711 @path[10.0.0.3, 4421]: 25831 00:16:54.711 @path[10.0.0.3, 4421]: 26025 00:16:54.711 @path[10.0.0.3, 4421]: 25530 00:16:54.711 20:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:16:54.711 20:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:16:54.711 20:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:54.711 20:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:16:54.711 20:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:16:54.711 20:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:16:54.711 20:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80112 00:16:54.711 20:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:54.711 20:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:16:54.711 20:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:54.711 20:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:54.971 20:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:16:54.971 20:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80224 00:16:54.971 20:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:16:54.971 20:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79793 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:01.673 20:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:17:01.673 20:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:01.673 20:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:17:01.673 20:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:01.673 Attaching 4 probes... 
00:17:01.673 00:17:01.673 00:17:01.673 00:17:01.673 00:17:01.673 00:17:01.673 20:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:01.673 20:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:01.673 20:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:01.673 20:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:17:01.673 20:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:17:01.673 20:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:17:01.673 20:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80224 00:17:01.673 20:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:01.673 20:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:17:01.673 20:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:01.673 20:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:01.673 20:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:17:01.673 20:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79793 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:01.673 20:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80342 00:17:01.673 20:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:08.231 20:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:08.231 20:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:08.231 20:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:08.231 20:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:08.231 Attaching 4 probes... 
00:17:08.231 @path[10.0.0.3, 4421]: 25583 00:17:08.231 @path[10.0.0.3, 4421]: 25957 00:17:08.231 @path[10.0.0.3, 4421]: 25958 00:17:08.231 @path[10.0.0.3, 4421]: 25900 00:17:08.231 @path[10.0.0.3, 4421]: 25897 00:17:08.231 20:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:08.231 20:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:08.231 20:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:08.231 20:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:08.231 20:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:08.231 20:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:08.231 20:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80342 00:17:08.231 20:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:08.231 20:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:08.231 [2024-11-26 20:40:22.309969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ea8b0 is same with the state(6) to be set 00:17:08.231 20:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:17:08.796 20:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:17:08.796 20:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80467 00:17:08.796 20:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:08.796 20:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79793 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:15.459 20:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:15.459 20:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:15.459 20:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:17:15.459 20:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:15.459 Attaching 4 probes... 
00:17:15.459 @path[10.0.0.3, 4420]: 23992 00:17:15.459 @path[10.0.0.3, 4420]: 24660 00:17:15.459 @path[10.0.0.3, 4420]: 24512 00:17:15.459 @path[10.0.0.3, 4420]: 24416 00:17:15.459 @path[10.0.0.3, 4420]: 24444 00:17:15.459 20:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:15.459 20:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:15.459 20:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:15.459 20:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:17:15.459 20:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:15.459 20:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:15.459 20:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80467 00:17:15.459 20:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:15.459 20:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:15.459 [2024-11-26 20:40:29.738665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:15.459 20:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:15.459 20:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:17:22.019 20:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:17:22.019 20:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80647 00:17:22.019 20:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79793 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:22.019 20:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:28.593 20:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:28.593 20:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:28.593 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:28.593 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:28.593 Attaching 4 probes... 
00:17:28.593 @path[10.0.0.3, 4421]: 25258 00:17:28.593 @path[10.0.0.3, 4421]: 25742 00:17:28.593 @path[10.0.0.3, 4421]: 25721 00:17:28.593 @path[10.0.0.3, 4421]: 25736 00:17:28.593 @path[10.0.0.3, 4421]: 25754 00:17:28.593 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:28.593 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:28.593 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:28.593 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:28.593 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:28.593 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:28.594 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80647 00:17:28.594 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:28.594 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 79843 00:17:28.594 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 79843 ']' 00:17:28.594 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 79843 00:17:28.594 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:17:28.594 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.594 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79843 00:17:28.594 killing process with pid 79843 00:17:28.594 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:28.594 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:28.594 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79843' 00:17:28.594 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 79843 00:17:28.594 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 79843 00:17:28.594 { 00:17:28.594 "results": [ 00:17:28.594 { 00:17:28.594 "job": "Nvme0n1", 00:17:28.594 "core_mask": "0x4", 00:17:28.594 "workload": "verify", 00:17:28.594 "status": "terminated", 00:17:28.594 "verify_range": { 00:17:28.594 "start": 0, 00:17:28.594 "length": 16384 00:17:28.594 }, 00:17:28.594 "queue_depth": 128, 00:17:28.594 "io_size": 4096, 00:17:28.594 "runtime": 54.257499, 00:17:28.594 "iops": 10600.359592689667, 00:17:28.594 "mibps": 41.40765465894401, 00:17:28.594 "io_failed": 0, 00:17:28.594 "io_timeout": 0, 00:17:28.594 "avg_latency_us": 12050.800870998379, 00:17:28.594 "min_latency_us": 1001.9446153846154, 00:17:28.594 "max_latency_us": 7020619.618461538 00:17:28.594 } 00:17:28.594 ], 00:17:28.594 "core_count": 1 00:17:28.594 } 00:17:28.594 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 79843 00:17:28.594 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:28.594 [2024-11-26 20:39:46.881741] Starting SPDK v25.01-pre git sha1 97329b16b 
/ DPDK 24.03.0 initialization... 00:17:28.594 [2024-11-26 20:39:46.881817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79843 ] 00:17:28.594 [2024-11-26 20:39:47.022695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.594 [2024-11-26 20:39:47.061769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.594 [2024-11-26 20:39:47.095410] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:28.594 Running I/O for 90 seconds... 00:17:28.594 7701.00 IOPS, 30.08 MiB/s [2024-11-26T20:40:43.149Z] 10072.00 IOPS, 39.34 MiB/s [2024-11-26T20:40:43.149Z] 11040.00 IOPS, 43.12 MiB/s [2024-11-26T20:40:43.149Z] 11520.00 IOPS, 45.00 MiB/s [2024-11-26T20:40:43.149Z] 11831.20 IOPS, 46.22 MiB/s [2024-11-26T20:40:43.149Z] 12052.00 IOPS, 47.08 MiB/s [2024-11-26T20:40:43.149Z] 12213.71 IOPS, 47.71 MiB/s [2024-11-26T20:40:43.149Z] [2024-11-26 20:39:55.982820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.594 [2024-11-26 20:39:55.982873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:28.594 [2024-11-26 20:39:55.982907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:129416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.594 [2024-11-26 20:39:55.982916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:28.594 [2024-11-26 20:39:55.982930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:129424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.594 [2024-11-26 20:39:55.982937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:28.594 [2024-11-26 20:39:55.982951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.594 [2024-11-26 20:39:55.982958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:28.594 [2024-11-26 20:39:55.982970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:129440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.594 [2024-11-26 20:39:55.982977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:28.594 [2024-11-26 20:39:55.982990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:129448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.594 [2024-11-26 20:39:55.982997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:28.594 [2024-11-26 20:39:55.983009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:129456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.594 [2024-11-26 20:39:55.983016] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:28.594 [2024-11-26 20:39:55.983029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:129464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.594 [2024-11-26 20:39:55.983035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:28.594 [2024-11-26 20:39:55.983048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:129472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.594 [2024-11-26 20:39:55.983055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:28.594 [2024-11-26 20:39:55.983068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:129480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.594 [2024-11-26 20:39:55.983094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:28.594 [2024-11-26 20:39:55.983107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.594 [2024-11-26 20:39:55.983114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:28.594 [2024-11-26 20:39:55.983127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.594 [2024-11-26 20:39:55.983134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:28.594 [2024-11-26 20:39:55.983146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:129504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.594 [2024-11-26 20:39:55.983153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:28.594 [2024-11-26 20:39:55.983166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:129512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.594 [2024-11-26 20:39:55.983173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:28.594 [2024-11-26 20:39:55.983186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.594 [2024-11-26 20:39:55.983193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:28.594 [2024-11-26 20:39:55.983206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.594 [2024-11-26 20:39:55.983213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:28.594 [2024-11-26 20:39:55.983227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:129536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.594 [2024-11-26 
20:39:55.983235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:28.594 [2024-11-26 20:39:55.983248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.594 [2024-11-26 20:39:55.983255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:28.594 [2024-11-26 20:39:55.983268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.594 [2024-11-26 20:39:55.983274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:28.594 [2024-11-26 20:39:55.983286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.594 [2024-11-26 20:39:55.983293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:28.594 [2024-11-26 20:39:55.983306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.594 [2024-11-26 20:39:55.983312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:28.594 [2024-11-26 20:39:55.983325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.594 [2024-11-26 20:39:55.983332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.594 [2024-11-26 20:39:55.983348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:129584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.594 [2024-11-26 20:39:55.983354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.594 [2024-11-26 20:39:55.983367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.594 [2024-11-26 20:39:55.983375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:28.594 [2024-11-26 20:39:55.983387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:128968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.594 [2024-11-26 20:39:55.983394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:28.594 [2024-11-26 20:39:55.983407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.595 [2024-11-26 20:39:55.983414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.983427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128984 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.595 [2024-11-26 20:39:55.983434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.983446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:128992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.595 [2024-11-26 20:39:55.983453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.983466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.595 [2024-11-26 20:39:55.983473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.983486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:129008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.595 [2024-11-26 20:39:55.983494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.983507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:129016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.595 [2024-11-26 20:39:55.983513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.983526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:129592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.595 [2024-11-26 20:39:55.983533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.983901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.595 [2024-11-26 20:39:55.983915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.983929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:129608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.595 [2024-11-26 20:39:55.983938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.983956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.595 [2024-11-26 20:39:55.983964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.983976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:129624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.595 [2024-11-26 20:39:55.983983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.983996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:129632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.595 [2024-11-26 20:39:55.984003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.984016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:129640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.595 [2024-11-26 20:39:55.984023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.984035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.595 [2024-11-26 20:39:55.984042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.984055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:129656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.595 [2024-11-26 20:39:55.984062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.984074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:129664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.595 [2024-11-26 20:39:55.984082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.984094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.595 [2024-11-26 20:39:55.984102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.984114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:129680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.595 [2024-11-26 20:39:55.984121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.984134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.595 [2024-11-26 20:39:55.984141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.984153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:129696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.595 [2024-11-26 20:39:55.984160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.984173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.595 [2024-11-26 20:39:55.984179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0018 p:0 m:0 
dnr:0 00:17:28.595 [2024-11-26 20:39:55.984196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:129712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.595 [2024-11-26 20:39:55.984203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.984216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:129024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.595 [2024-11-26 20:39:55.984223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.984236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:129032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.595 [2024-11-26 20:39:55.984243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.984255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:129040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.595 [2024-11-26 20:39:55.984263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.984275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:129048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.595 [2024-11-26 20:39:55.984283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.984295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:129056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.595 [2024-11-26 20:39:55.984302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.984315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:129064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.595 [2024-11-26 20:39:55.984322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.984334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:129072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.595 [2024-11-26 20:39:55.984341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.984354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:129080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.595 [2024-11-26 20:39:55.984361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.984373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.595 [2024-11-26 20:39:55.984380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.984392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:129096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.595 [2024-11-26 20:39:55.984399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:28.595 [2024-11-26 20:39:55.984412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:129104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-11-26 20:39:55.984420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:129112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-11-26 20:39:55.984443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:129120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-11-26 20:39:55.984463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-11-26 20:39:55.984482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:129136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-11-26 20:39:55.984503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:129144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-11-26 20:39:55.984523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-11-26 20:39:55.984542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:129160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-11-26 20:39:55.984562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 
[2024-11-26 20:39:55.984581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:129176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-11-26 20:39:55.984610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:129184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-11-26 20:39:55.984630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:129192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-11-26 20:39:55.984650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:129200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-11-26 20:39:55.984669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-11-26 20:39:55.984692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:129720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.596 [2024-11-26 20:39:55.984712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:129728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.596 [2024-11-26 20:39:55.984734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:129736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.596 [2024-11-26 20:39:55.984753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:129744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.596 [2024-11-26 20:39:55.984773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 
lba:129752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.596 [2024-11-26 20:39:55.984792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:129760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.596 [2024-11-26 20:39:55.984812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:129768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.596 [2024-11-26 20:39:55.984832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:129776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.596 [2024-11-26 20:39:55.984852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:129784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.596 [2024-11-26 20:39:55.984871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:129216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-11-26 20:39:55.984890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:129224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-11-26 20:39:55.984909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-11-26 20:39:55.984932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:129240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-11-26 20:39:55.984954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:129248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-11-26 20:39:55.984974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.984986] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:129256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-11-26 20:39:55.984993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.985006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:129264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-11-26 20:39:55.985013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.985026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:129272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-11-26 20:39:55.985033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.985046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:129280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-11-26 20:39:55.985053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.985065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:129288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-11-26 20:39:55.985072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.985085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-11-26 20:39:55.985092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.985104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:129304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-11-26 20:39:55.985112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:28.596 [2024-11-26 20:39:55.985124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-11-26 20:39:55.985131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.985144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:129320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-11-26 20:39:55.985151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.985163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:129328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-11-26 20:39:55.985170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0049 
p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.985185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-11-26 20:39:55.985193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-11-26 20:39:55.986085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:129800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-11-26 20:39:55.986108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:129808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-11-26 20:39:55.986128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:129816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-11-26 20:39:55.986149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:129824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-11-26 20:39:55.986168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:129832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-11-26 20:39:55.986188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:129840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-11-26 20:39:55.986208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:129848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-11-26 20:39:55.986227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:129856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-11-26 20:39:55.986246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:129864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-11-26 20:39:55.986266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-11-26 20:39:55.986285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-11-26 20:39:55.986312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:129888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-11-26 20:39:55.986331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:129896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-11-26 20:39:55.986351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:129904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-11-26 20:39:55.986370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:129912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-11-26 20:39:55.986389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:129344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-11-26 20:39:55.986408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:129352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-11-26 20:39:55.986428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:129360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-11-26 20:39:55.986448] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:129368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-11-26 20:39:55.986472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:129376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-11-26 20:39:55.986492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:129384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-11-26 20:39:55.986511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:129392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-11-26 20:39:55.986531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:129400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-11-26 20:39:55.986555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:129920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-11-26 20:39:55.986575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:129928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-11-26 20:39:55.986604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:129936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-11-26 20:39:55.986624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:129944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-11-26 20:39:55.986644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129952 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-11-26 20:39:55.986662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:129960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-11-26 20:39:55.986682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-11-26 20:39:55.986701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:28.597 [2024-11-26 20:39:55.986714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:129976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-11-26 20:39:55.986721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:28.597 12327.88 IOPS, 48.16 MiB/s [2024-11-26T20:40:43.152Z] 12359.00 IOPS, 48.28 MiB/s [2024-11-26T20:40:43.152Z] 12427.90 IOPS, 48.55 MiB/s [2024-11-26T20:40:43.152Z] 12282.09 IOPS, 47.98 MiB/s [2024-11-26T20:40:43.152Z] 12091.92 IOPS, 47.23 MiB/s [2024-11-26T20:40:43.152Z] 11936.54 IOPS, 46.63 MiB/s [2024-11-26T20:40:43.152Z] 11813.07 IOPS, 46.14 MiB/s [2024-11-26T20:40:43.152Z] [2024-11-26 20:40:02.432373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:53816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-11-26 20:40:02.432422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.432456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:53824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.598 [2024-11-26 20:40:02.432466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.432479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:53832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.598 [2024-11-26 20:40:02.432486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.432520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:53840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.598 [2024-11-26 20:40:02.432527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.432540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:53848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.598 [2024-11-26 20:40:02.432547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.432559] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:53856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.598 [2024-11-26 20:40:02.432566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.432578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:53864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.598 [2024-11-26 20:40:02.432585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.432607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.598 [2024-11-26 20:40:02.432614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.432626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-11-26 20:40:02.432633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.432646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:53248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-11-26 20:40:02.432653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.432665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:53256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-11-26 20:40:02.432672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.432684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-11-26 20:40:02.432691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.432703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-11-26 20:40:02.432710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.432723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-11-26 20:40:02.432729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.432742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:53288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-11-26 20:40:02.432748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 
00:17:28.598 [2024-11-26 20:40:02.432765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:53296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-11-26 20:40:02.432772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.432793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.598 [2024-11-26 20:40:02.432801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.432815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.598 [2024-11-26 20:40:02.432823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.432835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.598 [2024-11-26 20:40:02.432842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.432854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.598 [2024-11-26 20:40:02.432861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.432874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:53912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.598 [2024-11-26 20:40:02.432881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.432893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:53920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.598 [2024-11-26 20:40:02.432900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.432912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.598 [2024-11-26 20:40:02.432918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.432931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.598 [2024-11-26 20:40:02.432937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.432950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:53304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-11-26 20:40:02.432956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:62 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.432969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-11-26 20:40:02.432976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.432988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:53320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-11-26 20:40:02.432996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.433008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:53328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-11-26 20:40:02.433020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.433032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:53336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-11-26 20:40:02.433039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.433052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:53344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-11-26 20:40:02.433059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.433071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:53352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-11-26 20:40:02.433079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.433091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:53360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-11-26 20:40:02.433098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:28.598 [2024-11-26 20:40:02.433110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:53368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-11-26 20:40:02.433117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:53376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-11-26 20:40:02.433137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:53384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-11-26 20:40:02.433156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:53392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-11-26 20:40:02.433179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:53400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-11-26 20:40:02.433199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:53408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-11-26 20:40:02.433218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:53416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-11-26 20:40:02.433238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-11-26 20:40:02.433260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:53432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-11-26 20:40:02.433281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-11-26 20:40:02.433300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:53448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-11-26 20:40:02.433320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:53456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-11-26 20:40:02.433340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:53464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:28.599 [2024-11-26 20:40:02.433359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:53472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-11-26 20:40:02.433379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:53480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-11-26 20:40:02.433398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:53488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-11-26 20:40:02.433418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-11-26 20:40:02.433438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:53952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-11-26 20:40:02.433458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:53960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-11-26 20:40:02.433477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-11-26 20:40:02.433496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-11-26 20:40:02.433521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-11-26 20:40:02.433540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 
nsid:1 lba:53992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-11-26 20:40:02.433559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:54000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-11-26 20:40:02.433578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-11-26 20:40:02.433605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:54016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-11-26 20:40:02.433625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-11-26 20:40:02.433644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:54032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-11-26 20:40:02.433663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-11-26 20:40:02.433682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-11-26 20:40:02.433701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-11-26 20:40:02.433721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-11-26 20:40:02.433740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433756] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:53496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-11-26 20:40:02.433763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:53504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-11-26 20:40:02.433783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:53512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-11-26 20:40:02.433802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:53520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-11-26 20:40:02.433822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-11-26 20:40:02.433841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:53536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-11-26 20:40:02.433860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:53544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-11-26 20:40:02.433879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:28.599 [2024-11-26 20:40:02.433892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:53552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-11-26 20:40:02.433899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.433923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-11-26 20:40:02.433931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.433944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-11-26 20:40:02.433951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:17:28.600 [2024-11-26 20:40:02.433963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-11-26 20:40:02.433970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.433989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-11-26 20:40:02.433996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.434009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-11-26 20:40:02.434032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.434044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-11-26 20:40:02.434052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.434065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-11-26 20:40:02.434072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.434085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-11-26 20:40:02.434092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.434612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-11-26 20:40:02.434624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.434642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-11-26 20:40:02.434650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.434666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-11-26 20:40:02.434673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.434690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-11-26 20:40:02.434697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.434714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-11-26 20:40:02.434721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.434737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-11-26 20:40:02.434744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.434761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-11-26 20:40:02.434768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.434785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-11-26 20:40:02.434792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.434809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:53560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-11-26 20:40:02.434821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.434838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:53568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-11-26 20:40:02.434845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.434862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:53576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-11-26 20:40:02.434869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.434886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:53584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-11-26 20:40:02.434893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.434910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-11-26 20:40:02.434917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.434934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:53600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-11-26 20:40:02.434942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.434958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-11-26 20:40:02.434965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.434981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:53616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-11-26 20:40:02.434989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.435005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:53624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-11-26 20:40:02.435012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.435029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:53632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-11-26 20:40:02.435036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.435053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:53640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-11-26 20:40:02.435060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.435076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:53648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-11-26 20:40:02.435083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.435100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:53656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-11-26 20:40:02.435110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.435127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:53664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-11-26 20:40:02.435134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.435151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-11-26 20:40:02.435158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.435175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:53680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:28.600 [2024-11-26 20:40:02.435182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.435199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:53688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-11-26 20:40:02.435205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.435222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-11-26 20:40:02.435229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.435246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:53704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-11-26 20:40:02.435253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.435269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:53712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-11-26 20:40:02.435277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.435293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:53720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-11-26 20:40:02.435301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.435317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:53728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-11-26 20:40:02.435325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:28.600 [2024-11-26 20:40:02.435342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:53736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-11-26 20:40:02.435349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:02.435365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:53744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-11-26 20:40:02.435372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:02.435389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.601 [2024-11-26 20:40:02.435396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:02.435416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:109 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.601 [2024-11-26 20:40:02.435424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:02.435440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.601 [2024-11-26 20:40:02.435447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:02.435464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.601 [2024-11-26 20:40:02.435471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:02.435488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.601 [2024-11-26 20:40:02.435496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:02.435512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.601 [2024-11-26 20:40:02.435519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:02.435536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:54248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.601 [2024-11-26 20:40:02.435543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:02.435566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:54256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.601 [2024-11-26 20:40:02.435574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:02.435598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:53752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-11-26 20:40:02.435606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:02.435622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:53760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-11-26 20:40:02.435629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:02.435646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:53768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-11-26 20:40:02.435653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:02.435669] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:53776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-11-26 20:40:02.435677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:02.435693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:53784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-11-26 20:40:02.435700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:02.435720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:53792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-11-26 20:40:02.435727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:02.435744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:53800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-11-26 20:40:02.435751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:02.435767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:53808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-11-26 20:40:02.435775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:28.601 11366.33 IOPS, 44.40 MiB/s [2024-11-26T20:40:43.156Z] 11146.50 IOPS, 43.54 MiB/s [2024-11-26T20:40:43.156Z] 11238.12 IOPS, 43.90 MiB/s [2024-11-26T20:40:43.156Z] 11330.67 IOPS, 44.26 MiB/s [2024-11-26T20:40:43.156Z] 11419.37 IOPS, 44.61 MiB/s [2024-11-26T20:40:43.156Z] 11487.15 IOPS, 44.87 MiB/s [2024-11-26T20:40:43.156Z] 11561.10 IOPS, 45.16 MiB/s [2024-11-26T20:40:43.156Z] [2024-11-26 20:40:09.268194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.601 [2024-11-26 20:40:09.268251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:09.268290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.601 [2024-11-26 20:40:09.268301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:09.268319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.601 [2024-11-26 20:40:09.268328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:09.268344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.601 [2024-11-26 20:40:09.268353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0004 p:0 
m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:09.268368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.601 [2024-11-26 20:40:09.268377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:09.268392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.601 [2024-11-26 20:40:09.268401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:09.268416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.601 [2024-11-26 20:40:09.268425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:09.268440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.601 [2024-11-26 20:40:09.268449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:09.268465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-11-26 20:40:09.268492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:09.268509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-11-26 20:40:09.268518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:09.268533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-11-26 20:40:09.268542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:09.268557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-11-26 20:40:09.268566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:09.268582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-11-26 20:40:09.268601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:09.268616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-11-26 20:40:09.268625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:09.268641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-11-26 20:40:09.268650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:09.268666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-11-26 20:40:09.268674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:09.268701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.601 [2024-11-26 20:40:09.268711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:09.268728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.601 [2024-11-26 20:40:09.268737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:09.268753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.601 [2024-11-26 20:40:09.268762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:28.601 [2024-11-26 20:40:09.268777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.601 [2024-11-26 20:40:09.268786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.268801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.268810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.268831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.268840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.268856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.268865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.268881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.268889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.268905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-11-26 20:40:09.268914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.268930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-11-26 20:40:09.268939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.268954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-11-26 20:40:09.268963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.268979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-11-26 20:40:09.268987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-11-26 20:40:09.269012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-11-26 20:40:09.269036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-11-26 20:40:09.269061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-11-26 20:40:09.269085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.269109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 
[2024-11-26 20:40:09.269139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.269164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.269189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.269213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.269238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.269262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.269286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.269311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.269335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.269359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1288 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.269383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.269408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.269437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.269461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.269486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.269511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.269536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.269561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.269585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.269619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:52 nsid:1 lba:1368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.269643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.269667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-11-26 20:40:09.269691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-11-26 20:40:09.269716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-11-26 20:40:09.269740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:28.602 [2024-11-26 20:40:09.269760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-11-26 20:40:09.269768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.269784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-11-26 20:40:09.269793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.269808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-11-26 20:40:09.269817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.269833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-11-26 20:40:09.269842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.269857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-11-26 20:40:09.269866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.269881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-11-26 20:40:09.269890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.269906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-11-26 20:40:09.269915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.269931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-11-26 20:40:09.269940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.269956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-11-26 20:40:09.269964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.269980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-11-26 20:40:09.269997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.270013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-11-26 20:40:09.270022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.270037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-11-26 20:40:09.270046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.270066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-11-26 20:40:09.270076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.270091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-11-26 20:40:09.270101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.270129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.603 [2024-11-26 20:40:09.270139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:28.603 
[2024-11-26 20:40:09.270155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.603 [2024-11-26 20:40:09.270164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.270179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.603 [2024-11-26 20:40:09.270188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.270204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.603 [2024-11-26 20:40:09.270213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.270229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.603 [2024-11-26 20:40:09.270237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.270253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.603 [2024-11-26 20:40:09.270262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.270278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.603 [2024-11-26 20:40:09.270286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.270302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.603 [2024-11-26 20:40:09.270311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.270326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-11-26 20:40:09.270336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.270351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-11-26 20:40:09.270360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.270376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-11-26 20:40:09.270391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 
sqhd:0053 p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.270407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-11-26 20:40:09.270417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.270433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-11-26 20:40:09.270442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.270458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-11-26 20:40:09.270467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.270483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-11-26 20:40:09.270492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.270507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-11-26 20:40:09.270516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.270532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.603 [2024-11-26 20:40:09.270541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.270556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.603 [2024-11-26 20:40:09.270565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.270581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.603 [2024-11-26 20:40:09.270598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.270615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.603 [2024-11-26 20:40:09.270623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.270639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.603 [2024-11-26 20:40:09.270648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:28.603 [2024-11-26 20:40:09.270663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:09.270672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.270688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:09.270697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.270716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:09.270725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.270741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-11-26 20:40:09.270750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.270765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-11-26 20:40:09.270774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.270790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-11-26 20:40:09.270798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.270814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-11-26 20:40:09.270823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.270839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-11-26 20:40:09.270847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.270863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-11-26 20:40:09.270872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.270888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-11-26 20:40:09.270896] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.271511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-11-26 20:40:09.271529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.271553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:09.271562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.271585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:09.271604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.271627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:09.271636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.271667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:09.271677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.271704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:09.271713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.271736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:09.271745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.271768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:09.271777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.271807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:09.271817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.271840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 
[2024-11-26 20:40:09.271849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.271871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:09.271880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.271903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:09.271911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.271934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:09.271943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.271966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:09.271975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.271997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:09.272006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.272028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:09.272037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.272065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:09.272079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.272102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:09.272111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.272133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:09.272142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.272164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1664 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:09.272173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.272196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:09.272205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.272228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:09.272237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.272259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:09.272268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.272290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:09.272298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:09.272321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:09.272330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.604 11174.82 IOPS, 43.65 MiB/s [2024-11-26T20:40:43.159Z] 10688.96 IOPS, 41.75 MiB/s [2024-11-26T20:40:43.159Z] 10243.58 IOPS, 40.01 MiB/s [2024-11-26T20:40:43.159Z] 9833.84 IOPS, 38.41 MiB/s [2024-11-26T20:40:43.159Z] 9455.62 IOPS, 36.94 MiB/s [2024-11-26T20:40:43.159Z] 9105.41 IOPS, 35.57 MiB/s [2024-11-26T20:40:43.159Z] 8780.21 IOPS, 34.30 MiB/s [2024-11-26T20:40:43.159Z] 8812.03 IOPS, 34.42 MiB/s [2024-11-26T20:40:43.159Z] 8951.63 IOPS, 34.97 MiB/s [2024-11-26T20:40:43.159Z] 9081.45 IOPS, 35.47 MiB/s [2024-11-26T20:40:43.159Z] 9203.34 IOPS, 35.95 MiB/s [2024-11-26T20:40:43.159Z] 9317.09 IOPS, 36.39 MiB/s [2024-11-26T20:40:43.159Z] 9423.76 IOPS, 36.81 MiB/s [2024-11-26T20:40:43.159Z] [2024-11-26 20:40:22.310156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:22.310184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:28.604 [2024-11-26 20:40:22.310216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-11-26 20:40:22.310225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 
lba:101648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:101696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:101728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:101744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:101776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:101800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 
dnr:0 00:17:28.605 [2024-11-26 20:40:22.310657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:101808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:101816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:101248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-11-26 20:40:22.310702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:101256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-11-26 20:40:22.310723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:101264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-11-26 20:40:22.310746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:101272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-11-26 20:40:22.310767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:101280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-11-26 20:40:22.310786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:101288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-11-26 20:40:22.310805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:101296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-11-26 20:40:22.310825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:101304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-11-26 20:40:22.310845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:101232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-11-26 20:40:22.310883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:101240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-11-26 20:40:22.310899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:101840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:101856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.310987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.310994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.311002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-11-26 20:40:22.311009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.605 [2024-11-26 20:40:22.311018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:101880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.606 [2024-11-26 20:40:22.311024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:101312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:101328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:101336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:101344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:101352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:101368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:101376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:101392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:101400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:101408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:101416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:101424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:101432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.606 [2024-11-26 20:40:22.311286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.606 [2024-11-26 20:40:22.311301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:101904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.606 [2024-11-26 20:40:22.311316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.606 [2024-11-26 20:40:22.311331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 
[2024-11-26 20:40:22.311339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:101920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.606 [2024-11-26 20:40:22.311346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.606 [2024-11-26 20:40:22.311361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:101936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.606 [2024-11-26 20:40:22.311379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.606 [2024-11-26 20:40:22.311394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:101440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:101456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:101464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:101472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:101480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311494] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:101488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:101520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:101528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:101544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.606 [2024-11-26 20:40:22.311630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:101552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-11-26 20:40:22.311637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.311646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:101560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-11-26 20:40:22.311653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.311661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-11-26 20:40:22.311668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.311676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-11-26 20:40:22.311683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.311691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:101968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-11-26 20:40:22.311698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.311706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:101976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-11-26 20:40:22.311713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.311721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-11-26 20:40:22.311728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.311736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:101992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-11-26 20:40:22.311743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.311751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:102000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-11-26 20:40:22.311758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.311766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:102008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-11-26 20:40:22.311776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.311784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:102016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-11-26 20:40:22.311791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.311799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:102024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-11-26 20:40:22.311806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.311815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:98 nsid:1 lba:102032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-11-26 20:40:22.311821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.311830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:102040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-11-26 20:40:22.311836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.311845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:102048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-11-26 20:40:22.311851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.311859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-11-26 20:40:22.311866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.311876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:102064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-11-26 20:40:22.311883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.311891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:102072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-11-26 20:40:22.311898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.311906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-11-26 20:40:22.311913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.311921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:102088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-11-26 20:40:22.311928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.311936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:102096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-11-26 20:40:22.311943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.311951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:102104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-11-26 20:40:22.311958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.311969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:101568 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-11-26 20:40:22.311976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.311984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:101576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-11-26 20:40:22.311991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.311999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:101584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-11-26 20:40:22.312006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.312014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-11-26 20:40:22.312021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.312029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:101600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-11-26 20:40:22.312036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.312045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-11-26 20:40:22.312055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.312063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:101616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-11-26 20:40:22.312070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.312078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987310 is same with the state(6) to be set 00:17:28.607 [2024-11-26 20:40:22.312087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:28.607 [2024-11-26 20:40:22.312091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:28.607 [2024-11-26 20:40:22.312097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101624 len:8 PRP1 0x0 PRP2 0x0 00:17:28.607 [2024-11-26 20:40:22.312104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.312111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:28.607 [2024-11-26 20:40:22.312118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:28.607 [2024-11-26 20:40:22.312123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102112 len:8 PRP1 0x0 PRP2 0x0 00:17:28.607 [2024-11-26 20:40:22.312130] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.312137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:28.607 [2024-11-26 20:40:22.312142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:28.607 [2024-11-26 20:40:22.312147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102120 len:8 PRP1 0x0 PRP2 0x0 00:17:28.607 [2024-11-26 20:40:22.312154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.607 [2024-11-26 20:40:22.312162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:28.607 [2024-11-26 20:40:22.312168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:28.607 [2024-11-26 20:40:22.312174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102128 len:8 PRP1 0x0 PRP2 0x0 00:17:28.607 [2024-11-26 20:40:22.312180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.608 [2024-11-26 20:40:22.312188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:28.608 [2024-11-26 20:40:22.312192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:28.608 [2024-11-26 20:40:22.312197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102136 len:8 PRP1 0x0 PRP2 0x0 00:17:28.608 [2024-11-26 20:40:22.312204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.608 [2024-11-26 20:40:22.312211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:28.608 [2024-11-26 20:40:22.312216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:28.608 [2024-11-26 20:40:22.312221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102144 len:8 PRP1 0x0 PRP2 0x0 00:17:28.608 [2024-11-26 20:40:22.312227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.608 [2024-11-26 20:40:22.312235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:28.608 [2024-11-26 20:40:22.312239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:28.608 [2024-11-26 20:40:22.312244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102152 len:8 PRP1 0x0 PRP2 0x0 00:17:28.608 [2024-11-26 20:40:22.312251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.608 [2024-11-26 20:40:22.312260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:28.608 [2024-11-26 20:40:22.312264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:28.608 [2024-11-26 20:40:22.312270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102160 len:8 PRP1 0x0 PRP2 0x0 00:17:28.608 [2024-11-26 20:40:22.312276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.608 [2024-11-26 20:40:22.312283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:28.608 [2024-11-26 20:40:22.312288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:28.608 [2024-11-26 20:40:22.312293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102168 len:8 PRP1 0x0 PRP2 0x0 00:17:28.608 [2024-11-26 20:40:22.312300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.608 [2024-11-26 20:40:22.312307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:28.608 [2024-11-26 20:40:22.312314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:28.608 [2024-11-26 20:40:22.312319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102176 len:8 PRP1 0x0 PRP2 0x0 00:17:28.608 [2024-11-26 20:40:22.312326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.608 [2024-11-26 20:40:22.312332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:28.608 [2024-11-26 20:40:22.312337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:28.608 [2024-11-26 20:40:22.312342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102184 len:8 PRP1 0x0 PRP2 0x0 00:17:28.608 [2024-11-26 20:40:22.312351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.608 [2024-11-26 20:40:22.312358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:28.608 [2024-11-26 20:40:22.312363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:28.608 [2024-11-26 20:40:22.312368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102192 len:8 PRP1 0x0 PRP2 0x0 00:17:28.608 [2024-11-26 20:40:22.312375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.608 [2024-11-26 20:40:22.312382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:28.608 [2024-11-26 20:40:22.312386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:28.608 [2024-11-26 20:40:22.312392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102200 len:8 PRP1 0x0 PRP2 0x0 00:17:28.608 [2024-11-26 20:40:22.312398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.608 [2024-11-26 20:40:22.312405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:28.608 [2024-11-26 20:40:22.312410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:28.608 [2024-11-26 20:40:22.312415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102208 len:8 PRP1 0x0 PRP2 0x0 00:17:28.608 [2024-11-26 20:40:22.312421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:17:28.608 [2024-11-26 20:40:22.312428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:28.608 [2024-11-26 20:40:22.312433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:28.608 [2024-11-26 20:40:22.312438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102216 len:8 PRP1 0x0 PRP2 0x0 00:17:28.608 [2024-11-26 20:40:22.312445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.608 [2024-11-26 20:40:22.312453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:28.608 [2024-11-26 20:40:22.312458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:28.608 [2024-11-26 20:40:22.312463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102224 len:8 PRP1 0x0 PRP2 0x0 00:17:28.608 [2024-11-26 20:40:22.312470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.608 [2024-11-26 20:40:22.312477] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:28.608 [2024-11-26 20:40:22.312481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:28.608 [2024-11-26 20:40:22.312487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102232 len:8 PRP1 0x0 PRP2 0x0 00:17:28.608 [2024-11-26 20:40:22.312493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.608 [2024-11-26 20:40:22.312500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:28.608 [2024-11-26 20:40:22.312506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:28.608 [2024-11-26 20:40:22.312511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102240 len:8 PRP1 0x0 PRP2 0x0 00:17:28.608 [2024-11-26 20:40:22.312518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.608 [2024-11-26 20:40:22.312525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:28.608 [2024-11-26 20:40:22.312530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:28.608 [2024-11-26 20:40:22.312537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102248 len:8 PRP1 0x0 PRP2 0x0 00:17:28.608 [2024-11-26 20:40:22.312544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.608 [2024-11-26 20:40:22.312642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.608 [2024-11-26 20:40:22.312655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.608 [2024-11-26 20:40:22.312663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.608 [2024-11-26 20:40:22.312670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.608 [2024-11-26 20:40:22.312678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.608 [2024-11-26 20:40:22.312685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.608 [2024-11-26 20:40:22.312692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.608 [2024-11-26 20:40:22.312699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.608 [2024-11-26 20:40:22.312708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-11-26 20:40:22.312715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.608 [2024-11-26 20:40:22.312726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f81e0 is same with the state(6) to be set 00:17:28.608 [2024-11-26 20:40:22.313554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:17:28.608 [2024-11-26 20:40:22.313573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f81e0 (9): Bad file descriptor 00:17:28.608 [2024-11-26 20:40:22.313839] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:28.608 [2024-11-26 20:40:22.313856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f81e0 with addr=10.0.0.3, port=4421 00:17:28.608 [2024-11-26 20:40:22.313864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f81e0 is same with the state(6) to be set 00:17:28.608 [2024-11-26 20:40:22.313900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f81e0 (9): Bad file descriptor 00:17:28.608 [2024-11-26 20:40:22.313916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:17:28.608 [2024-11-26 20:40:22.313924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:17:28.608 [2024-11-26 20:40:22.313931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:17:28.608 [2024-11-26 20:40:22.313938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
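The status pairs printed above are the NVMe (SCT/SC) values: (03/02) is Status Code Type 0x3 (path-related) with Status Code 0x02, the "ASYMMETRIC ACCESS INACCESSIBLE" abort the host sees while the active path's ANA group is made inaccessible, and (00/08) is the generic "ABORTED - SQ DELETION" status returned for I/O still queued when that path's submission queue is torn down; the subsequent connect() failure with errno 111 (connection refused) is the reconnect attempt to port 4421 racing the secondary path coming up, after which bdev_nvme retries the reset. A minimal sketch of decoding that (SCT/SC) pair from the 16-bit status word (phase bit at bit 0, SC in bits 1-8, SCT in bits 9-11, DNR in bit 15, as the NVMe completion is typically laid out) is below; the function name is illustrative and not part of the SPDK tooling.

```bash
#!/usr/bin/env bash
# Decode the "(SCT/SC)" pair that spdk_nvme_print_completion logs, e.g. (03/02).
# Assumed layout of the 16-bit status word (phase bit at bit 0):
#   SC = bits 1-8, SCT = bits 9-11, DNR = bit 15.
decode_nvme_status() {
    local status=$1
    local sc=$((  (status >> 1)  & 0xff ))
    local sct=$(( (status >> 9)  & 0x7  ))
    local dnr=$(( (status >> 15) & 0x1  ))

    case "$sct/$sc" in
        3/2) printf '(%02x/%02x) path error: asymmetric access inaccessible (dnr=%d)\n' "$sct" "$sc" "$dnr" ;;
        0/8) printf '(%02x/%02x) generic: aborted - SQ deletion (dnr=%d)\n' "$sct" "$sc" "$dnr" ;;
        *)   printf '(%02x/%02x) other status (dnr=%d)\n' "$sct" "$sc" "$dnr" ;;
    esac
}

decode_nvme_status $(( (0x3 << 9) | (0x02 << 1) ))   # -> (03/02), as in the ANA aborts above
decode_nvme_status $(( (0x0 << 9) | (0x08 << 1) ))   # -> (00/08), as in the SQ-deletion aborts
```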
00:17:28.608 [2024-11-26 20:40:22.313945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:17:28.608 9515.69 IOPS, 37.17 MiB/s [2024-11-26T20:40:43.163Z] 9590.03 IOPS, 37.46 MiB/s [2024-11-26T20:40:43.163Z] 9661.81 IOPS, 37.74 MiB/s [2024-11-26T20:40:43.163Z] 9731.61 IOPS, 38.01 MiB/s [2024-11-26T20:40:43.163Z] 9796.03 IOPS, 38.27 MiB/s [2024-11-26T20:40:43.163Z] 9856.62 IOPS, 38.50 MiB/s [2024-11-26T20:40:43.163Z] 9915.15 IOPS, 38.73 MiB/s [2024-11-26T20:40:43.163Z] 9970.57 IOPS, 38.95 MiB/s [2024-11-26T20:40:43.163Z] 10023.86 IOPS, 39.16 MiB/s [2024-11-26T20:40:43.163Z] 10076.64 IOPS, 39.36 MiB/s [2024-11-26T20:40:43.163Z] [2024-11-26 20:40:32.370245] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:17:28.608 10135.40 IOPS, 39.59 MiB/s [2024-11-26T20:40:43.163Z] 10196.11 IOPS, 39.83 MiB/s [2024-11-26T20:40:43.163Z] 10254.23 IOPS, 40.06 MiB/s [2024-11-26T20:40:43.163Z] 10309.94 IOPS, 40.27 MiB/s [2024-11-26T20:40:43.163Z] 10358.31 IOPS, 40.46 MiB/s [2024-11-26T20:40:43.164Z] 10408.58 IOPS, 40.66 MiB/s [2024-11-26T20:40:43.164Z] 10456.73 IOPS, 40.85 MiB/s [2024-11-26T20:40:43.164Z] 10503.33 IOPS, 41.03 MiB/s [2024-11-26T20:40:43.164Z] 10548.17 IOPS, 41.20 MiB/s [2024-11-26T20:40:43.164Z] 10591.65 IOPS, 41.37 MiB/s [2024-11-26T20:40:43.164Z] Received shutdown signal, test time was about 54.258154 seconds 00:17:28.609 00:17:28.609 Latency(us) 00:17:28.609 [2024-11-26T20:40:43.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.609 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:28.609 Verification LBA range: start 0x0 length 0x4000 00:17:28.609 Nvme0n1 : 54.26 10600.36 41.41 0.00 0.00 12050.80 1001.94 7020619.62 00:17:28.609 [2024-11-26T20:40:43.164Z] =================================================================================================================== 00:17:28.609 [2024-11-26T20:40:43.164Z] Total : 10600.36 41.41 0.00 0.00 12050.80 1001.94 7020619.62 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:28.609 rmmod nvme_tcp 00:17:28.609 rmmod nvme_fabrics 00:17:28.609 rmmod nvme_keyring 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 
-- # set -e 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 79793 ']' 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 79793 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 79793 ']' 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 79793 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79793 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79793' 00:17:28.609 killing process with pid 79793 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 79793 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 79793 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:28.609 20:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:28.609 20:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:28.609 20:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.609 20:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:28.609 20:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.609 20:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:17:28.609 00:17:28.609 real 0m58.975s 00:17:28.609 user 2m45.900s 00:17:28.609 sys 0m13.924s 00:17:28.609 20:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:28.609 20:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:28.609 ************************************ 00:17:28.609 END TEST nvmf_host_multipath 00:17:28.609 ************************************ 00:17:28.609 20:40:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:17:28.609 20:40:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:28.609 20:40:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:28.609 20:40:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.609 ************************************ 00:17:28.609 START TEST nvmf_timeout 00:17:28.609 ************************************ 00:17:28.609 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:17:28.874 * Looking for test storage... 
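The tail of the multipath run above is the standard nvmftestfini teardown: the test subsystem is deleted over RPC, the kernel initiator modules are unloaded, the SPDK-tagged iptables rules are stripped, and the veth/bridge topology is dismantled before the next test (nvmf_timeout) starts. A minimal sketch of that cleanup, condensed from the commands visible in the trace (the nvmfpid variable is a stand-in for the target pid, 79793 in this run):

    # tear down the target side: drop the subsystem, unload initiator modules, stop nvmf_tgt
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                                   # killprocess 79793 in this run

    # strip only the iptables rules tagged with the SPDK_NVMF comment
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # dismantle the veth/bridge topology; _remove_spdk_ns then drops the namespace itself
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" nomaster && ip link set "$l" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2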
00:17:28.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:28.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.874 --rc genhtml_branch_coverage=1 00:17:28.874 --rc genhtml_function_coverage=1 00:17:28.874 --rc genhtml_legend=1 00:17:28.874 --rc geninfo_all_blocks=1 00:17:28.874 --rc geninfo_unexecuted_blocks=1 00:17:28.874 00:17:28.874 ' 00:17:28.874 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:28.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.874 --rc genhtml_branch_coverage=1 00:17:28.874 --rc genhtml_function_coverage=1 00:17:28.874 --rc genhtml_legend=1 00:17:28.874 --rc geninfo_all_blocks=1 00:17:28.874 --rc geninfo_unexecuted_blocks=1 00:17:28.874 00:17:28.875 ' 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:28.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.875 --rc genhtml_branch_coverage=1 00:17:28.875 --rc genhtml_function_coverage=1 00:17:28.875 --rc genhtml_legend=1 00:17:28.875 --rc geninfo_all_blocks=1 00:17:28.875 --rc geninfo_unexecuted_blocks=1 00:17:28.875 00:17:28.875 ' 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:28.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.875 --rc genhtml_branch_coverage=1 00:17:28.875 --rc genhtml_function_coverage=1 00:17:28.875 --rc genhtml_legend=1 00:17:28.875 --rc geninfo_all_blocks=1 00:17:28.875 --rc geninfo_unexecuted_blocks=1 00:17:28.875 00:17:28.875 ' 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.875 
20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:28.875 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:28.875 20:40:43 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:28.875 Cannot find device "nvmf_init_br" 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:28.875 Cannot find device "nvmf_init_br2" 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:17:28.875 Cannot find device "nvmf_tgt_br" 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:28.875 Cannot find device "nvmf_tgt_br2" 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:28.875 Cannot find device "nvmf_init_br" 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:17:28.875 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:28.875 Cannot find device "nvmf_init_br2" 00:17:28.876 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:17:28.876 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:28.876 Cannot find device "nvmf_tgt_br" 00:17:28.876 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:17:28.876 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:28.876 Cannot find device "nvmf_tgt_br2" 00:17:28.876 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:17:28.876 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:28.876 Cannot find device "nvmf_br" 00:17:28.876 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:17:28.876 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:28.876 Cannot find device "nvmf_init_if" 00:17:28.876 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:17:28.876 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:28.876 Cannot find device "nvmf_init_if2" 00:17:28.876 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:17:28.876 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:28.876 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:28.876 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:17:28.876 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:28.876 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:28.876 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:17:28.876 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:28.876 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:28.876 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:28.876 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:28.876 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:28.876 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:17:28.876 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:28.876 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:28.876 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:28.876 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:28.876 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
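The block above is nvmf_veth_init from nvmf/common.sh rebuilding the virtual test network for the timeout test: two initiator veth pairs on the host (10.0.0.1/.2) and two target pairs moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3/.4), all of their peer ends joined by the nvmf_br bridge, with TCP port 4420 opened for the NVMe/TCP listener. A condensed sketch of the commands as they appear in the trace:

    # target namespace plus two initiator/target veth pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # initiator addresses on the host, target addresses inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bring everything up, bridge the peer ends, and open TCP/4420 for the initiators
    for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for p in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$p" master nvmf_br
    done
    # (the real rules also carry an "-m comment --comment SPDK_NVMF:..." tag so teardown can find them)
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT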
00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:29.135 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:29.135 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:17:29.135 00:17:29.135 --- 10.0.0.3 ping statistics --- 00:17:29.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.135 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:29.135 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:29.135 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:17:29.135 00:17:29.135 --- 10.0.0.4 ping statistics --- 00:17:29.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.135 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:29.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:29.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:29.135 00:17:29.135 --- 10.0.0.1 ping statistics --- 00:17:29.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.135 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:29.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:29.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:17:29.135 00:17:29.135 --- 10.0.0.2 ping statistics --- 00:17:29.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.135 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=81002 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 81002 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81002 ']' 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
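Connectivity is then verified in both directions with single-packet pings before the target application is started; a condensed form of the four checks above:

    # host -> target namespace
    ping -c 1 10.0.0.3
    ping -c 1 10.0.0.4
    # target namespace -> host
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2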
00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:29.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:29.135 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:29.135 [2024-11-26 20:40:43.589853] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:17:29.135 [2024-11-26 20:40:43.589906] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.393 [2024-11-26 20:40:43.727203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:29.393 [2024-11-26 20:40:43.757670] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.393 [2024-11-26 20:40:43.757824] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:29.393 [2024-11-26 20:40:43.757855] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:29.393 [2024-11-26 20:40:43.757889] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:29.393 [2024-11-26 20:40:43.757917] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
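nvmfappstart then launches the target application inside the namespace and waits for its RPC socket to come up. A sketch based on the invocation in the trace; waitforlisten is SPDK's own helper, approximated here with a simple polling loop against the default /var/tmp/spdk.sock socket:

    # start nvmf_tgt on cores 0-1 (-m 0x3) with all tracepoint groups enabled (-e 0xFFFF)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    # crude stand-in for waitforlisten: poll until the RPC socket answers
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done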
00:17:29.393 [2024-11-26 20:40:43.758547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.393 [2024-11-26 20:40:43.758554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.393 [2024-11-26 20:40:43.786643] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:29.961 20:40:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:29.961 20:40:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:17:29.961 20:40:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:29.961 20:40:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:29.961 20:40:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:29.961 20:40:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.961 20:40:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:29.961 20:40:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:30.220 [2024-11-26 20:40:44.673628] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.220 20:40:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:30.478 Malloc0 00:17:30.478 20:40:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:30.736 20:40:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:30.993 20:40:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:30.993 [2024-11-26 20:40:45.471843] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:30.993 20:40:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81050 00:17:30.993 20:40:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81050 /var/tmp/bdevperf.sock 00:17:30.993 20:40:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81050 ']' 00:17:30.993 20:40:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:30.993 20:40:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:17:30.993 20:40:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.993 20:40:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:30.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
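With the target running, timeout.sh provisions it entirely over RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, and a subsystem that exposes that bdev on 10.0.0.3:4420. The calls as they appear in the trace (rpc_py expands to scripts/rpc.py):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc_py nvmf_create_transport -t tcp -o -u 8192           # TCP transport, 8 KiB in-capsule data
    $rpc_py bdev_malloc_create 64 512 -b Malloc0              # 64 MiB bdev, 512-byte blocks
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420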
00:17:30.993 20:40:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.993 20:40:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:30.993 [2024-11-26 20:40:45.523741] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:17:30.993 [2024-11-26 20:40:45.523800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81050 ] 00:17:31.252 [2024-11-26 20:40:45.663761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.252 [2024-11-26 20:40:45.698852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.252 [2024-11-26 20:40:45.728759] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:31.848 20:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:31.848 20:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:17:31.848 20:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:32.107 20:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:17:32.364 NVMe0n1 00:17:32.364 20:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81069 00:17:32.364 20:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:17:32.364 20:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:32.621 Running I/O for 10 seconds... 
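On the host side, bdevperf is started in waiting mode (-z) on its own RPC socket, bdev_nvme is configured with a retry count of -1 (which the script uses to retry without limit), and the controller is attached with a 5-second ctrlr-loss timeout and 2-second reconnect delay: the knobs this timeout test exercises. A sketch of the sequence from the trace:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    bdevperf_pid=$!

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $rpc_py bdev_nvme_set_options -r -1
    $rpc_py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # kick off the 10-second verify workload defined on the bdevperf command line
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests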
00:17:33.555 20:40:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:33.555 7700.00 IOPS, 30.08 MiB/s [2024-11-26T20:40:48.110Z] [2024-11-26 20:40:48.052350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:33.555 [2024-11-26 20:40:48.052511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.555 [2024-11-26 20:40:48.052573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:33.555 [2024-11-26 20:40:48.052617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.555 [2024-11-26 20:40:48.052669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:33.555 [2024-11-26 20:40:48.052699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.555 [2024-11-26 20:40:48.052727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:33.555 [2024-11-26 20:40:48.052780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.555 [2024-11-26 20:40:48.052809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d25e50 is same with the state(6) to be set 00:17:33.555 [2024-11-26 20:40:48.053043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.555 [2024-11-26 20:40:48.053085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.555 [2024-11-26 20:40:48.053120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:67872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.555 [2024-11-26 20:40:48.053208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.555 [2024-11-26 20:40:48.053236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.555 [2024-11-26 20:40:48.053307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.555 [2024-11-26 20:40:48.053337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:67888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.555 [2024-11-26 20:40:48.053364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.555 [2024-11-26 20:40:48.053429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:67896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.555 [2024-11-26 20:40:48.053457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:33.555 [2024-11-26 20:40:48.053484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:67904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.555 [2024-11-26 20:40:48.053537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.555 [2024-11-26 20:40:48.053569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:67912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.555 [2024-11-26 20:40:48.053719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.555 [2024-11-26 20:40:48.053727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:67920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.555 [2024-11-26 20:40:48.053734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.555 [2024-11-26 20:40:48.053741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.555 [2024-11-26 20:40:48.053747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.555 [2024-11-26 20:40:48.053755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.555 [2024-11-26 20:40:48.053761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.555 [2024-11-26 20:40:48.053768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.555 [2024-11-26 20:40:48.053773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.555 [2024-11-26 20:40:48.053780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.555 [2024-11-26 20:40:48.053786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.555 [2024-11-26 20:40:48.053793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.555 [2024-11-26 20:40:48.053798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.555 [2024-11-26 20:40:48.053806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:67968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.555 [2024-11-26 20:40:48.053811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.555 [2024-11-26 20:40:48.053819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.555 [2024-11-26 20:40:48.053825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.555 [2024-11-26 
20:40:48.053833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.555 [2024-11-26 20:40:48.053838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.555 [2024-11-26 20:40:48.053845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.555 [2024-11-26 20:40:48.053851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.555 [2024-11-26 20:40:48.053858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.555 [2024-11-26 20:40:48.053864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.555 [2024-11-26 20:40:48.053871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.555 [2024-11-26 20:40:48.053877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.555 [2024-11-26 20:40:48.053884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.555 [2024-11-26 20:40:48.053896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.555 [2024-11-26 20:40:48.053904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.555 [2024-11-26 20:40:48.053909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.555 [2024-11-26 20:40:48.053917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.555 [2024-11-26 20:40:48.053922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.555 [2024-11-26 20:40:48.053929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.053935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.053942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.053948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.053956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.053961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.053968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:68064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.053974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.053981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:68080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:68128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:68136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:88 nsid:1 lba:68144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:68160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:68168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:68176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:68184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:68192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:68200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:68208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:68216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:68224 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:68232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:68240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:68264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:68272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:68280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:68288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:68304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 
20:40:48.054376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:68312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:68320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:68328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:68336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:68344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:68352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:68360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:68376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:68384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054503] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:68392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:68400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.556 [2024-11-26 20:40:48.054536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:68408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.556 [2024-11-26 20:40:48.054541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:68416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:68424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:68432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:68440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:68448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:68456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:68464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:68472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:68480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:68488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:68496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:68504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:68512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:68520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:68528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:68536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:17:33.557 [2024-11-26 20:40:48.054786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:68560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:68568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:68576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:68584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:68592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:68600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:68608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:68616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:68624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054917] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:68632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:68640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:68648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:68656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:68664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:68672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.054992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:68680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.054997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.055005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:68688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.055010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.055017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:68696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.055022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.055029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:68704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.055034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.055043] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:68712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.055048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.055055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:68720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.055060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.055067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:68728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.055073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.055080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:68736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.055085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.055094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:68744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.557 [2024-11-26 20:40:48.055100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.055107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:67752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.557 [2024-11-26 20:40:48.055113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.055120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:67760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.557 [2024-11-26 20:40:48.055125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.055132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:67768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.557 [2024-11-26 20:40:48.055138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.557 [2024-11-26 20:40:48.055145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:67776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.558 [2024-11-26 20:40:48.055151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.558 [2024-11-26 20:40:48.055158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:67784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.558 [2024-11-26 20:40:48.055163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.558 [2024-11-26 20:40:48.055171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:67792 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.558 [2024-11-26 20:40:48.055176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.558 [2024-11-26 20:40:48.055184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:67800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.558 [2024-11-26 20:40:48.055189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.558 [2024-11-26 20:40:48.055200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:67808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.558 [2024-11-26 20:40:48.055205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.558 [2024-11-26 20:40:48.055213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:67816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.558 [2024-11-26 20:40:48.055218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.558 [2024-11-26 20:40:48.055225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:67824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.558 [2024-11-26 20:40:48.055230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.558 [2024-11-26 20:40:48.055238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:67832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.558 [2024-11-26 20:40:48.055243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.558 [2024-11-26 20:40:48.055251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:67840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.558 [2024-11-26 20:40:48.055256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.558 [2024-11-26 20:40:48.055263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:67848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.558 [2024-11-26 20:40:48.055268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.558 [2024-11-26 20:40:48.055275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:67856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.558 [2024-11-26 20:40:48.055281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.558 [2024-11-26 20:40:48.055288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:67864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.558 [2024-11-26 20:40:48.055293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.558 [2024-11-26 20:40:48.055302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:68752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:33.558 [2024-11-26 20:40:48.055308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.558 [2024-11-26 20:40:48.055314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85970 is same with the state(6) to be set 00:17:33.558 [2024-11-26 20:40:48.055321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:33.558 [2024-11-26 20:40:48.055326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:33.558 [2024-11-26 20:40:48.055331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68760 len:8 PRP1 0x0 PRP2 0x0 00:17:33.558 [2024-11-26 20:40:48.055336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.558 [2024-11-26 20:40:48.055607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:33.558 [2024-11-26 20:40:48.055622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d25e50 (9): Bad file descriptor 00:17:33.558 [2024-11-26 20:40:48.055686] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:33.558 [2024-11-26 20:40:48.055697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d25e50 with addr=10.0.0.3, port=4420 00:17:33.558 [2024-11-26 20:40:48.055704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d25e50 is same with the state(6) to be set 00:17:33.558 [2024-11-26 20:40:48.055715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d25e50 (9): Bad file descriptor 00:17:33.558 [2024-11-26 20:40:48.055725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:17:33.558 [2024-11-26 20:40:48.055730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:17:33.558 [2024-11-26 20:40:48.055736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:17:33.558 [2024-11-26 20:40:48.055743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:17:33.558 [2024-11-26 20:40:48.055749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:33.558 20:40:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:17:35.425 4234.00 IOPS, 16.54 MiB/s [2024-11-26T20:40:50.238Z] 2822.67 IOPS, 11.03 MiB/s [2024-11-26T20:40:50.238Z] [2024-11-26 20:40:50.056000] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:35.683 [2024-11-26 20:40:50.056045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d25e50 with addr=10.0.0.3, port=4420 00:17:35.683 [2024-11-26 20:40:50.056054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d25e50 is same with the state(6) to be set 00:17:35.683 [2024-11-26 20:40:50.056067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d25e50 (9): Bad file descriptor 00:17:35.683 [2024-11-26 20:40:50.056077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:17:35.683 [2024-11-26 20:40:50.056083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:17:35.683 [2024-11-26 20:40:50.056088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:17:35.683 [2024-11-26 20:40:50.056094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:17:35.683 [2024-11-26 20:40:50.056100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:35.683 20:40:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:17:35.683 20:40:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:35.683 20:40:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:17:35.943 20:40:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:17:35.943 20:40:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:17:35.943 20:40:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:17:35.943 20:40:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:17:35.943 20:40:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:17:35.943 20:40:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:17:37.440 2117.00 IOPS, 8.27 MiB/s [2024-11-26T20:40:52.252Z] 1693.60 IOPS, 6.62 MiB/s [2024-11-26T20:40:52.252Z] [2024-11-26 20:40:52.056322] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:37.697 [2024-11-26 20:40:52.056370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d25e50 with addr=10.0.0.3, port=4420 00:17:37.697 [2024-11-26 20:40:52.056378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d25e50 is same with the state(6) to be set 00:17:37.697 [2024-11-26 20:40:52.056391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d25e50 (9): Bad file descriptor 00:17:37.697 [2024-11-26 20:40:52.056401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:17:37.697 [2024-11-26 20:40:52.056406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:17:37.697 [2024-11-26 20:40:52.056412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:17:37.697 [2024-11-26 20:40:52.056417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:17:37.697 [2024-11-26 20:40:52.056423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:39.562 1411.33 IOPS, 5.51 MiB/s [2024-11-26T20:40:54.117Z] 1209.71 IOPS, 4.73 MiB/s [2024-11-26T20:40:54.117Z] [2024-11-26 20:40:54.056583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:17:39.562 [2024-11-26 20:40:54.056630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:17:39.562 [2024-11-26 20:40:54.056636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:17:39.562 [2024-11-26 20:40:54.056641] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:17:39.562 [2024-11-26 20:40:54.056647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:17:40.754 1058.50 IOPS, 4.13 MiB/s 00:17:40.754 Latency(us) 00:17:40.754 [2024-11-26T20:40:55.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.754 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:40.754 Verification LBA range: start 0x0 length 0x4000 00:17:40.754 NVMe0n1 : 8.11 1044.67 4.08 15.79 0.00 120474.91 3654.89 7020619.62 00:17:40.754 [2024-11-26T20:40:55.309Z] =================================================================================================================== 00:17:40.754 [2024-11-26T20:40:55.309Z] Total : 1044.67 4.08 15.79 0.00 120474.91 3654.89 7020619.62 00:17:40.754 { 00:17:40.754 "results": [ 00:17:40.754 { 00:17:40.754 "job": "NVMe0n1", 00:17:40.754 "core_mask": "0x4", 00:17:40.754 "workload": "verify", 00:17:40.754 "status": "finished", 00:17:40.754 "verify_range": { 00:17:40.754 "start": 0, 00:17:40.754 "length": 16384 00:17:40.754 }, 00:17:40.754 "queue_depth": 128, 00:17:40.754 "io_size": 4096, 00:17:40.754 "runtime": 8.105928, 00:17:40.754 "iops": 1044.6675568793603, 00:17:40.754 "mibps": 4.080732644060001, 00:17:40.754 "io_failed": 128, 00:17:40.754 "io_timeout": 0, 00:17:40.754 "avg_latency_us": 120474.91330923149, 00:17:40.754 "min_latency_us": 3654.892307692308, 00:17:40.754 "max_latency_us": 7020619.618461538 00:17:40.754 } 00:17:40.754 ], 00:17:40.754 "core_count": 1 00:17:40.754 } 00:17:41.012 20:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:17:41.012 20:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:41.012 20:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:17:41.269 20:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:17:41.269 20:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:17:41.269 20:40:55 
nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:17:41.269 20:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:17:41.527 20:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:17:41.527 20:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 81069 00:17:41.527 20:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81050 00:17:41.527 20:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81050 ']' 00:17:41.527 20:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81050 00:17:41.527 20:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:17:41.527 20:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:41.527 20:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81050 00:17:41.527 20:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:41.527 20:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:41.527 killing process with pid 81050 00:17:41.527 20:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81050' 00:17:41.527 20:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81050 00:17:41.527 Received shutdown signal, test time was about 9.065870 seconds 00:17:41.527 00:17:41.527 Latency(us) 00:17:41.527 [2024-11-26T20:40:56.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.527 [2024-11-26T20:40:56.082Z] =================================================================================================================== 00:17:41.527 [2024-11-26T20:40:56.082Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:41.527 20:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81050 00:17:41.784 20:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:41.784 [2024-11-26 20:40:56.298718] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:41.784 20:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:17:41.784 20:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=81192 00:17:41.784 20:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 81192 /var/tmp/bdevperf.sock 00:17:41.784 20:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81192 ']' 00:17:41.784 20:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:41.784 20:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:41.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:41.784 20:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:41.784 20:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:41.784 20:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:42.042 [2024-11-26 20:40:56.344640] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:17:42.042 [2024-11-26 20:40:56.344699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81192 ] 00:17:42.042 [2024-11-26 20:40:56.479997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.042 [2024-11-26 20:40:56.510500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:42.042 [2024-11-26 20:40:56.538746] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:42.974 20:40:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:42.974 20:40:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:17:42.974 20:40:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:42.974 20:40:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:17:43.232 NVMe0n1 00:17:43.232 20:40:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=81210 00:17:43.232 20:40:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:17:43.232 20:40:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:43.232 Running I/O for 10 seconds... 
00:17:44.260 20:40:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:44.520 10960.00 IOPS, 42.81 MiB/s [2024-11-26T20:40:59.075Z] [2024-11-26 20:40:58.880987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.520 [2024-11-26 20:40:58.881025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.520 [2024-11-26 20:40:58.881038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.520 [2024-11-26 20:40:58.881044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.520 [2024-11-26 20:40:58.881050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.520 [2024-11-26 20:40:58.881055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.520 [2024-11-26 20:40:58.881061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.520 [2024-11-26 20:40:58.881066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.520 [2024-11-26 20:40:58.881072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.520 [2024-11-26 20:40:58.881077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.520 [2024-11-26 20:40:58.881083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.520 [2024-11-26 20:40:58.881087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.520 [2024-11-26 20:40:58.881093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.520 [2024-11-26 20:40:58.881097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.520 [2024-11-26 20:40:58.881103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.521 [2024-11-26 20:40:58.881107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.521 [2024-11-26 20:40:58.881118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98568 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.521 [2024-11-26 20:40:58.881127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.521 [2024-11-26 20:40:58.881137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.521 [2024-11-26 20:40:58.881148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.521 [2024-11-26 20:40:58.881157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.521 [2024-11-26 20:40:58.881167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.521 [2024-11-26 20:40:58.881177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.521 [2024-11-26 20:40:58.881187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.521 [2024-11-26 20:40:58.881198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.521 [2024-11-26 20:40:58.881209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.521 [2024-11-26 20:40:58.881219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:44.521 [2024-11-26 20:40:58.881229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.521 [2024-11-26 20:40:58.881246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.521 [2024-11-26 20:40:58.881256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.521 [2024-11-26 20:40:58.881267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.521 [2024-11-26 20:40:58.881278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.521 [2024-11-26 20:40:58.881288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.521 [2024-11-26 20:40:58.881298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.521 [2024-11-26 20:40:58.881312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.521 [2024-11-26 20:40:58.881323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.521 [2024-11-26 20:40:58.881333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.521 [2024-11-26 20:40:58.881343] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.521 [2024-11-26 20:40:58.881353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.521 [2024-11-26 20:40:58.881362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.521 [2024-11-26 20:40:58.881374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.521 [2024-11-26 20:40:58.881385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.521 [2024-11-26 20:40:58.881395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.521 [2024-11-26 20:40:58.881401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.522 [2024-11-26 20:40:58.881405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.522 [2024-11-26 20:40:58.881415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.522 [2024-11-26 20:40:58.881426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.522 [2024-11-26 20:40:58.881436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.522 [2024-11-26 20:40:58.881446] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.522 [2024-11-26 20:40:58.881457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.522 [2024-11-26 20:40:58.881467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.522 [2024-11-26 20:40:58.881477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.522 [2024-11-26 20:40:58.881486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.522 [2024-11-26 20:40:58.881496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.522 [2024-11-26 20:40:58.881506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:98080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.522 [2024-11-26 20:40:58.881516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.522 [2024-11-26 20:40:58.881526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.522 [2024-11-26 20:40:58.881536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.522 [2024-11-26 20:40:58.881546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.522 [2024-11-26 20:40:58.881557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.522 [2024-11-26 20:40:58.881567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.522 [2024-11-26 20:40:58.881577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.522 [2024-11-26 20:40:58.881596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.522 [2024-11-26 20:40:58.881606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.522 [2024-11-26 20:40:58.881616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.522 [2024-11-26 20:40:58.881626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.522 [2024-11-26 20:40:58.881636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.522 [2024-11-26 20:40:58.881646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.522 [2024-11-26 20:40:58.881656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 
[2024-11-26 20:40:58.881662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.522 [2024-11-26 20:40:58.881666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.522 [2024-11-26 20:40:58.881675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.522 [2024-11-26 20:40:58.881685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.522 [2024-11-26 20:40:58.881691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.522 [2024-11-26 20:40:58.881695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.523 [2024-11-26 20:40:58.881706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.523 [2024-11-26 20:40:58.881716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.523 [2024-11-26 20:40:58.881727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.523 [2024-11-26 20:40:58.881737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.523 [2024-11-26 20:40:58.881747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.523 [2024-11-26 20:40:58.881758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881763] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.523 [2024-11-26 20:40:58.881771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.523 [2024-11-26 20:40:58.881781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.523 [2024-11-26 20:40:58.881791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.523 [2024-11-26 20:40:58.881801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.523 [2024-11-26 20:40:58.881811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.523 [2024-11-26 20:40:58.881821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.523 [2024-11-26 20:40:58.881831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.523 [2024-11-26 20:40:58.881841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.523 [2024-11-26 20:40:58.881851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.523 [2024-11-26 20:40:58.881861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:14 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.523 [2024-11-26 20:40:58.881872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.523 [2024-11-26 20:40:58.881882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.523 [2024-11-26 20:40:58.881892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.523 [2024-11-26 20:40:58.881902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.523 [2024-11-26 20:40:58.881912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.523 [2024-11-26 20:40:58.881923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.523 [2024-11-26 20:40:58.881933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.523 [2024-11-26 20:40:58.881943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.523 [2024-11-26 20:40:58.881953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.523 [2024-11-26 20:40:58.881964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98832 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.523 [2024-11-26 20:40:58.881974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.523 [2024-11-26 20:40:58.881979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.523 [2024-11-26 20:40:58.881984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.524 [2024-11-26 20:40:58.882008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.524 [2024-11-26 20:40:58.882018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.524 [2024-11-26 20:40:58.882028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.524 [2024-11-26 20:40:58.882038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.524 [2024-11-26 20:40:58.882048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.524 [2024-11-26 20:40:58.882059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.524 [2024-11-26 20:40:58.882069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.524 [2024-11-26 20:40:58.882080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:44.524 [2024-11-26 20:40:58.882090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.524 [2024-11-26 20:40:58.882100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.524 [2024-11-26 20:40:58.882110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.524 [2024-11-26 20:40:58.882121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.524 [2024-11-26 20:40:58.882131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.524 [2024-11-26 20:40:58.882141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.524 [2024-11-26 20:40:58.882151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.524 [2024-11-26 20:40:58.882161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.524 [2024-11-26 20:40:58.882171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.524 [2024-11-26 20:40:58.882182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.524 [2024-11-26 20:40:58.882192] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.524 [2024-11-26 20:40:58.882202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.524 [2024-11-26 20:40:58.882212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.524 [2024-11-26 20:40:58.882222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.524 [2024-11-26 20:40:58.882233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.524 [2024-11-26 20:40:58.882242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.524 [2024-11-26 20:40:58.882252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.524 [2024-11-26 20:40:58.882262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.524 [2024-11-26 20:40:58.882272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.524 [2024-11-26 20:40:58.882282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.524 [2024-11-26 20:40:58.882288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.525 [2024-11-26 20:40:58.882292] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.525 [2024-11-26 20:40:58.882298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.525 [2024-11-26 20:40:58.882302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.525 [2024-11-26 20:40:58.882308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.525 [2024-11-26 20:40:58.882312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.525 [2024-11-26 20:40:58.882319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.525 [2024-11-26 20:40:58.882323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.525 [2024-11-26 20:40:58.882329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.525 [2024-11-26 20:40:58.882333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.525 [2024-11-26 20:40:58.882345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.525 [2024-11-26 20:40:58.882349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.525 [2024-11-26 20:40:58.882354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.525 [2024-11-26 20:40:58.882359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.525 [2024-11-26 20:40:58.882382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:44.525 [2024-11-26 20:40:58.882386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:44.525 [2024-11-26 20:40:58.882391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98488 len:8 PRP1 0x0 PRP2 0x0 00:17:44.525 [2024-11-26 20:40:58.882395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.525 [2024-11-26 20:40:58.882481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:44.525 [2024-11-26 20:40:58.882488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.525 [2024-11-26 20:40:58.882494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:44.525 [2024-11-26 20:40:58.882499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.525 [2024-11-26 20:40:58.882504] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:44.525 [2024-11-26 20:40:58.882508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.525 [2024-11-26 20:40:58.882513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:44.525 [2024-11-26 20:40:58.882517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.525 [2024-11-26 20:40:58.882522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ee50 is same with the state(6) to be set 00:17:44.525 [2024-11-26 20:40:58.882700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:17:44.525 [2024-11-26 20:40:58.882718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86ee50 (9): Bad file descriptor 00:17:44.525 [2024-11-26 20:40:58.882776] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:44.525 [2024-11-26 20:40:58.882791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x86ee50 with addr=10.0.0.3, port=4420 00:17:44.525 [2024-11-26 20:40:58.882796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ee50 is same with the state(6) to be set 00:17:44.525 [2024-11-26 20:40:58.882804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86ee50 (9): Bad file descriptor 00:17:44.525 [2024-11-26 20:40:58.882812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:17:44.525 [2024-11-26 20:40:58.882817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:17:44.525 [2024-11-26 20:40:58.882822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:17:44.525 [2024-11-26 20:40:58.882828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:17:44.525 [2024-11-26 20:40:58.882833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:17:44.525 20:40:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:17:45.456 6120.00 IOPS, 23.91 MiB/s [2024-11-26T20:41:00.011Z] [2024-11-26 20:40:59.882925] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:17:45.456 [2024-11-26 20:40:59.882969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x86ee50 with addr=10.0.0.3, port=4420
00:17:45.456 [2024-11-26 20:40:59.882976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ee50 is same with the state(6) to be set
00:17:45.456 [2024-11-26 20:40:59.882987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86ee50 (9): Bad file descriptor
00:17:45.456 [2024-11-26 20:40:59.882997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:17:45.456 [2024-11-26 20:40:59.883001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:17:45.456 [2024-11-26 20:40:59.883007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:17:45.457 [2024-11-26 20:40:59.883013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:17:45.457 [2024-11-26 20:40:59.883018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:17:45.457 20:40:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:17:45.714 [2024-11-26 20:41:00.088631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:17:45.714 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 81210
00:17:46.537 4080.00 IOPS, 15.94 MiB/s [2024-11-26T20:41:01.092Z] [2024-11-26 20:41:00.901470] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
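The sequence above is the recovery half of the timeout test: after the listener was torn down, every queued I/O on qpair 1 was aborted with SQ DELETION, the reconnect attempts fail with connect() errno 111 against 10.0.0.3:4420, and only once host/timeout.sh@91 re-adds the listener does the next reset succeed ([nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful). As a hedged sketch, not taken from this run's scripts, the listener toggle that produces this pattern can be reproduced against a running target with the stock rpc.py commands; the NQN, address, and port are simply the values visible in the log:

    # Illustrative reproduction of the listener toggle driven by host/timeout.sh;
    # the repo path and the "wait 81210" pid above are specific to this CI run.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420  # in-flight I/O now times out and is aborted
    sleep 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420     # path restored; the next controller reset succeeds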
00:17:48.406 3060.00 IOPS, 11.95 MiB/s [2024-11-26T20:41:03.894Z] 4820.20 IOPS, 18.83 MiB/s [2024-11-26T20:41:04.829Z] 6225.50 IOPS, 24.32 MiB/s [2024-11-26T20:41:06.197Z] 7223.00 IOPS, 28.21 MiB/s [2024-11-26T20:41:07.130Z] 7977.12 IOPS, 31.16 MiB/s [2024-11-26T20:41:08.080Z] 8556.56 IOPS, 33.42 MiB/s [2024-11-26T20:41:08.080Z] 9019.30 IOPS, 35.23 MiB/s
00:17:53.525 Latency(us)
00:17:53.525 [2024-11-26T20:41:08.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:53.525 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:53.525 Verification LBA range: start 0x0 length 0x4000
00:17:53.525 NVMe0n1 : 10.01 9024.39 35.25 0.00 0.00 14167.41 2306.36 3019898.88
00:17:53.525 [2024-11-26T20:41:08.080Z] ===================================================================================================================
00:17:53.525 [2024-11-26T20:41:08.080Z] Total : 9024.39 35.25 0.00 0.00 14167.41 2306.36 3019898.88
00:17:53.525 {
00:17:53.525   "results": [
00:17:53.525     {
00:17:53.525       "job": "NVMe0n1",
00:17:53.525       "core_mask": "0x4",
00:17:53.525       "workload": "verify",
00:17:53.525       "status": "finished",
00:17:53.525       "verify_range": {
00:17:53.525         "start": 0,
00:17:53.525         "length": 16384
00:17:53.525       },
00:17:53.525       "queue_depth": 128,
00:17:53.525       "io_size": 4096,
00:17:53.525       "runtime": 10.006775,
00:17:53.525       "iops": 9024.385978499566,
00:17:53.525       "mibps": 35.25150772851393,
00:17:53.525       "io_failed": 0,
00:17:53.525       "io_timeout": 0,
00:17:53.525       "avg_latency_us": 14167.413173646575,
00:17:53.525       "min_latency_us": 2306.3630769230767,
00:17:53.525       "max_latency_us": 3019898.88
00:17:53.525     }
00:17:53.525   ],
00:17:53.525   "core_count": 1
00:17:53.525 }
00:17:53.525 20:41:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=81324
00:17:53.525 20:41:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:17:53.525 20:41:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:17:53.525 Running I/O for 10 seconds...
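The table and the JSON object above report the same bdevperf summary: NVMe0n1 sustained 9024.39 IOPS (35.25 MiB/s) over the roughly 10 s verify run on core mask 0x4, with no failed or timed-out I/O, an average latency of 14167.41 us, and a max latency near 3 s that reflects the forced timeout window. If that JSON block is saved to a file with the leading elapsed-time column stripped, the headline fields can be pulled out with jq; the file name below is only illustrative, not something this run produces:

    # Illustrative only: assumes the perform_tests JSON has been saved as perf.json
    jq '.results[0] | {job, iops, mibps, avg_latency_us, io_failed, io_timeout}' perf.json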
00:17:54.514 20:41:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:54.514 10132.00 IOPS, 39.58 MiB/s [2024-11-26T20:41:09.069Z] [2024-11-26 20:41:08.994627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:89176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.514 [2024-11-26 20:41:08.994659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.514 [2024-11-26 20:41:08.994671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:89184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.514 [2024-11-26 20:41:08.994676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.514 [2024-11-26 20:41:08.994683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.514 [2024-11-26 20:41:08.994688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.514 [2024-11-26 20:41:08.994694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.514 [2024-11-26 20:41:08.994699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.514 [2024-11-26 20:41:08.994705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:89208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.514 [2024-11-26 20:41:08.994709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.514 [2024-11-26 20:41:08.994716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.514 [2024-11-26 20:41:08.994720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.514 [2024-11-26 20:41:08.994726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.514 [2024-11-26 20:41:08.994730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.514 [2024-11-26 20:41:08.994736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:88232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.514 [2024-11-26 20:41:08.994741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.514 [2024-11-26 20:41:08.994747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.514 [2024-11-26 20:41:08.994751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.514 [2024-11-26 20:41:08.994757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:88248 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.514 [2024-11-26 20:41:08.994761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.514 [2024-11-26 20:41:08.994767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:88256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.514 [2024-11-26 20:41:08.994772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.514 [2024-11-26 20:41:08.994778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:88264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.514 [2024-11-26 20:41:08.994782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.514 [2024-11-26 20:41:08.994788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:88272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.994793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.994806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:88280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.994811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.994817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:88288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.994822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.994828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:88296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.994832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.994838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:88304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.994842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.994850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:88312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.994855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.994861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:88320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.994865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.994871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:88328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:54.515 [2024-11-26 20:41:08.994875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.994882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:88336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.994886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.994892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:88344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.994896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.994902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.515 [2024-11-26 20:41:08.994906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.994912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.515 [2024-11-26 20:41:08.994917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.994923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:88352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.994927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.994933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:88360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.994938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.994945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:88368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.994949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.994955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:88376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.994960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.994966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:88384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.994970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.994976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:88392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 
20:41:08.994980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.994986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:88400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.994990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.994996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:89248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.515 [2024-11-26 20:41:08.995000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.995006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:88408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.995010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.995017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:88416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.995021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.995027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:88424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.995032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.995037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:88432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.995042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.995048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:88440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.995052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.995058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:88448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.995062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.995069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:88456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.995073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.995079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.995083] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.995089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:88472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.995093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.995099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:88480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.995104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.995110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:88488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.995114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.995120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:88496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.995124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.995130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:88504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.995134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.995140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:88512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.995144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.995150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:88520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.995154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.995160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.995164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.995170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:88536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.995174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.995181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.995185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.995191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:88552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.995196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.515 [2024-11-26 20:41:08.995202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:88560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.515 [2024-11-26 20:41:08.995207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:88568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:88576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:88584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:88592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:88600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:88608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:88616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:88624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:88632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:88664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:88672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:88680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:88688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:88696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:88704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 
[2024-11-26 20:41:08.995401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:88712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:88720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:88728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:88736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:88744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:88752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:88760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:88768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:88776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:88784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995499] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:88792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:88800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:88808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:88816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:88824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.516 [2024-11-26 20:41:08.995550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.516 [2024-11-26 20:41:08.995556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:88832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:88840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:88848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:88856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:88864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995613] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:75 nsid:1 lba:88872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:88880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:88888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:88896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:88904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:88912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:88920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:88928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:88944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 
lba:88952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:88960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:88968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:88976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:88984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:88992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:89000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:89008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:89016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:89024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:89032 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:89040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:89056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:89064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:89072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:89080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:89088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:89104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 20:41:08.995926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.517 [2024-11-26 20:41:08.995931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:89112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.517 [2024-11-26 
20:41:08.995936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.518 [2024-11-26 20:41:08.995941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:89120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.518 [2024-11-26 20:41:08.995946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.518 [2024-11-26 20:41:08.995951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:89128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.518 [2024-11-26 20:41:08.995955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.518 [2024-11-26 20:41:08.995961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.518 [2024-11-26 20:41:08.995965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.518 [2024-11-26 20:41:08.995971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:89144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.518 [2024-11-26 20:41:08.995975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.518 [2024-11-26 20:41:08.995981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:89152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.518 [2024-11-26 20:41:08.995985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.518 [2024-11-26 20:41:08.995991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:89160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.518 [2024-11-26 20:41:08.995995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.518 [2024-11-26 20:41:08.996000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa19590 is same with the state(6) to be set 00:17:54.518 [2024-11-26 20:41:08.996006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:54.518 [2024-11-26 20:41:08.996009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:54.518 [2024-11-26 20:41:08.996015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89168 len:8 PRP1 0x0 PRP2 0x0 00:17:54.518 [2024-11-26 20:41:08.996019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.518 [2024-11-26 20:41:08.996211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:17:54.518 [2024-11-26 20:41:08.996253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86ee50 (9): Bad file descriptor 00:17:54.518 [2024-11-26 20:41:08.996310] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:54.518 [2024-11-26 20:41:08.996318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x86ee50 with addr=10.0.0.3, port=4420 00:17:54.518 [2024-11-26 20:41:08.996325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ee50 is same with the state(6) to be set 00:17:54.518 [2024-11-26 20:41:08.996333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86ee50 (9): Bad file descriptor 00:17:54.518 [2024-11-26 20:41:08.996341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:17:54.518 [2024-11-26 20:41:08.996346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:17:54.518 [2024-11-26 20:41:08.996352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:17:54.518 [2024-11-26 20:41:08.996357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:17:54.518 [2024-11-26 20:41:08.996362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:17:54.518 20:41:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:17:55.507 5514.50 IOPS, 21.54 MiB/s [2024-11-26T20:41:10.062Z] [2024-11-26 20:41:09.996443] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:55.508 [2024-11-26 20:41:09.996485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x86ee50 with addr=10.0.0.3, port=4420 00:17:55.508 [2024-11-26 20:41:09.996492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ee50 is same with the state(6) to be set 00:17:55.508 [2024-11-26 20:41:09.996504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86ee50 (9): Bad file descriptor 00:17:55.508 [2024-11-26 20:41:09.996513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:17:55.508 [2024-11-26 20:41:09.996518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:17:55.508 [2024-11-26 20:41:09.996524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:17:55.508 [2024-11-26 20:41:09.996529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:17:55.508 [2024-11-26 20:41:09.996535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:17:56.695 3676.33 IOPS, 14.36 MiB/s [2024-11-26T20:41:11.250Z] [2024-11-26 20:41:10.996635] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:56.695 [2024-11-26 20:41:10.996682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x86ee50 with addr=10.0.0.3, port=4420 00:17:56.695 [2024-11-26 20:41:10.996690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ee50 is same with the state(6) to be set 00:17:56.695 [2024-11-26 20:41:10.996704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86ee50 (9): Bad file descriptor 00:17:56.695 [2024-11-26 20:41:10.996714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:17:56.695 [2024-11-26 20:41:10.996718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:17:56.695 [2024-11-26 20:41:10.996724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:17:56.695 [2024-11-26 20:41:10.996730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:17:56.695 [2024-11-26 20:41:10.996736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:17:57.625 2757.25 IOPS, 10.77 MiB/s [2024-11-26T20:41:12.180Z] [2024-11-26 20:41:11.999438] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:57.625 [2024-11-26 20:41:11.999474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x86ee50 with addr=10.0.0.3, port=4420 00:17:57.625 [2024-11-26 20:41:11.999481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ee50 is same with the state(6) to be set 00:17:57.625 [2024-11-26 20:41:11.999654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86ee50 (9): Bad file descriptor 00:17:57.625 [2024-11-26 20:41:11.999820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:17:57.625 [2024-11-26 20:41:11.999825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:17:57.625 [2024-11-26 20:41:11.999830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:17:57.625 [2024-11-26 20:41:11.999836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:17:57.625 [2024-11-26 20:41:11.999841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:17:57.625 20:41:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:57.881 [2024-11-26 20:41:12.206568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:57.881 20:41:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 81324 00:17:58.703 2205.80 IOPS, 8.62 MiB/s [2024-11-26T20:41:13.258Z] [2024-11-26 20:41:13.021603] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:18:00.583 3838.00 IOPS, 14.99 MiB/s [2024-11-26T20:41:16.090Z] 5242.00 IOPS, 20.48 MiB/s [2024-11-26T20:41:17.023Z] 6303.50 IOPS, 24.62 MiB/s [2024-11-26T20:41:17.956Z] 7139.11 IOPS, 27.89 MiB/s [2024-11-26T20:41:17.956Z] 7800.40 IOPS, 30.47 MiB/s 00:18:03.401 Latency(us) 00:18:03.401 [2024-11-26T20:41:17.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.401 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:03.401 Verification LBA range: start 0x0 length 0x4000 00:18:03.401 NVMe0n1 : 10.01 7804.51 30.49 5473.31 0.00 9616.20 415.90 3019898.88 00:18:03.401 [2024-11-26T20:41:17.956Z] =================================================================================================================== 00:18:03.401 [2024-11-26T20:41:17.956Z] Total : 7804.51 30.49 5473.31 0.00 9616.20 0.00 3019898.88 00:18:03.401 { 00:18:03.401 "results": [ 00:18:03.401 { 00:18:03.401 "job": "NVMe0n1", 00:18:03.401 "core_mask": "0x4", 00:18:03.401 "workload": "verify", 00:18:03.401 "status": "finished", 00:18:03.401 "verify_range": { 00:18:03.401 "start": 0, 00:18:03.401 "length": 16384 00:18:03.401 }, 00:18:03.401 "queue_depth": 128, 00:18:03.401 "io_size": 4096, 00:18:03.401 "runtime": 10.006005, 00:18:03.401 "iops": 7804.513389709479, 00:18:03.401 "mibps": 30.486380428552653, 00:18:03.401 "io_failed": 54766, 00:18:03.401 "io_timeout": 0, 00:18:03.401 "avg_latency_us": 9616.203364008074, 00:18:03.401 "min_latency_us": 415.90153846153845, 00:18:03.401 "max_latency_us": 3019898.88 00:18:03.401 } 00:18:03.401 ], 00:18:03.401 "core_count": 1 00:18:03.401 } 00:18:03.401 20:41:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 81192 00:18:03.401 20:41:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81192 ']' 00:18:03.401 20:41:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81192 00:18:03.401 20:41:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:18:03.401 20:41:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:03.401 20:41:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81192 00:18:03.401 killing process with pid 81192 00:18:03.401 Received shutdown signal, test time was about 10.000000 seconds 00:18:03.401 00:18:03.401 Latency(us) 00:18:03.401 [2024-11-26T20:41:17.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.401 [2024-11-26T20:41:17.956Z] =================================================================================================================== 00:18:03.401 [2024-11-26T20:41:17.956Z] Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:18:03.401 20:41:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:03.401 20:41:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:03.401 20:41:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81192' 00:18:03.401 20:41:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81192 00:18:03.401 20:41:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81192 00:18:03.682 20:41:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:18:03.682 20:41:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=81439 00:18:03.682 20:41:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 81439 /var/tmp/bdevperf.sock 00:18:03.682 20:41:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81439 ']' 00:18:03.682 20:41:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:03.682 20:41:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:03.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:03.682 20:41:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:03.682 20:41:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:03.682 20:41:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:03.682 [2024-11-26 20:41:18.087441] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:18:03.682 [2024-11-26 20:41:18.087506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81439 ] 00:18:03.972 [2024-11-26 20:41:18.223293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.972 [2024-11-26 20:41:18.255283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.972 [2024-11-26 20:41:18.284663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:04.536 20:41:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:04.536 20:41:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:18:04.536 20:41:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=81454 00:18:04.536 20:41:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:18:04.536 20:41:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81439 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:18:04.794 20:41:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:05.052 NVMe0n1 00:18:05.052 20:41:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=81491 00:18:05.052 20:41:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:18:05.052 20:41:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:05.052 Running I/O for 10 seconds... 
00:18:05.985 20:41:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:06.246 20066.00 IOPS, 78.38 MiB/s [2024-11-26T20:41:20.801Z] [2024-11-26 20:41:20.621196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 
00:18:06.246 [2024-11-26 20:41:20.621314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:06.246 [2024-11-26 20:41:20.621441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.246 [2024-11-26 20:41:20.621454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:06.246 [2024-11-26 20:41:20.621458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.246 [2024-11-26 20:41:20.621462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with [2024-11-26 20:41:20.621466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsthe state(6) to be set 00:18:06.246 id:0 cdw10:00000000 cdw11:00000000 
00:18:06.246 [2024-11-26 20:41:20.621471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.246 [2024-11-26 20:41:20.621472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.246 [2024-11-26 20:41:20.621475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:06.247 [2024-11-26 20:41:20.621480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.247 [2024-11-26 20:41:20.621484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cce50 is same [2024-11-26 20:41:20.621488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with with the state(6) to be set 00:18:06.247 the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621542] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the 
state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:18:06.247 [2024-11-26 20:41:20.621765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:66600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.247 [2024-11-26 20:41:20.621773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.247 [2024-11-26 20:41:20.621783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.247 [2024-11-26 20:41:20.621788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.247 [2024-11-26 20:41:20.621794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:56336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.247 [2024-11-26 20:41:20.621798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.247 [2024-11-26 20:41:20.621804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:31672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.247 [2024-11-26 20:41:20.621809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.247 [2024-11-26 20:41:20.621815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:47840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.247 [2024-11-26 20:41:20.621819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.247 [2024-11-26 20:41:20.621826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.247 [2024-11-26 20:41:20.621830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.247 [2024-11-26 20:41:20.621836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:48432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.247 [2024-11-26 20:41:20.621841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.247 [2024-11-26 20:41:20.621846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:06.247 [2024-11-26 20:41:20.621851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.621856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:127168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.621861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.621867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.621871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.621878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.621882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.621888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:123168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.621892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.621900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.621904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.621910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:48248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.621915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.621921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.621925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.621931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:70048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.621936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.621942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:42184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.621946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.621952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:27472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 
20:41:20.621957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.621962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.621967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.621973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:60448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.621977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.621983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.621987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:123240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:92296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:106080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622083] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:37864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:31520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:56448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:70744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:121768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:46568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:67632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.248 [2024-11-26 20:41:20.622287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:68104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.248 [2024-11-26 20:41:20.622291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:101200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:30896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:88808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:113704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:06.249 [2024-11-26 20:41:20.622401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:93096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:29040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:69448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:107152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622504] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:30856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:40576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:85200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:65352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:29888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:122896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622617] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:91456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:101256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:41648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:104808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:104304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.249 [2024-11-26 20:41:20.622709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:109040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.249 [2024-11-26 20:41:20.622714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622719] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:35624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:51680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:107712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 
lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:48128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:51664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:26424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:89408 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:127512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:49552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.622994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:74688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.622999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.623004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:44280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.623009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.623015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:86128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.623019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.623025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.623029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.623035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 
20:41:20.623039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.623045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:127416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.623049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.623055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:59872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.623059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.623065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.250 [2024-11-26 20:41:20.623070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.250 [2024-11-26 20:41:20.623075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.251 [2024-11-26 20:41:20.623079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.251 [2024-11-26 20:41:20.623085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:87232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.251 [2024-11-26 20:41:20.623091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.251 [2024-11-26 20:41:20.623097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.251 [2024-11-26 20:41:20.623102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.251 [2024-11-26 20:41:20.623107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:116568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.251 [2024-11-26 20:41:20.623111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.251 [2024-11-26 20:41:20.623117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.251 [2024-11-26 20:41:20.623123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.251 [2024-11-26 20:41:20.623129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1239e20 is same with the state(6) to be set 00:18:06.251 [2024-11-26 20:41:20.623135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:06.251 [2024-11-26 20:41:20.623138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:06.251 [2024-11-26 20:41:20.623142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113904 len:8 PRP1 0x0 PRP2 0x0 00:18:06.251 
[2024-11-26 20:41:20.623147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.251 [2024-11-26 20:41:20.623362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:06.251 [2024-11-26 20:41:20.623383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cce50 (9): Bad file descriptor 00:18:06.251 [2024-11-26 20:41:20.623448] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:06.251 [2024-11-26 20:41:20.623458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cce50 with addr=10.0.0.3, port=4420 00:18:06.251 [2024-11-26 20:41:20.623463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cce50 is same with the state(6) to be set 00:18:06.251 [2024-11-26 20:41:20.623472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cce50 (9): Bad file descriptor 00:18:06.251 [2024-11-26 20:41:20.623480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:18:06.251 [2024-11-26 20:41:20.623484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:18:06.251 [2024-11-26 20:41:20.623490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:18:06.251 [2024-11-26 20:41:20.623495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:18:06.251 [2024-11-26 20:41:20.623500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:06.251 20:41:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 81491 00:18:08.114 10986.50 IOPS, 42.92 MiB/s [2024-11-26T20:41:22.669Z] 7324.33 IOPS, 28.61 MiB/s [2024-11-26T20:41:22.669Z] [2024-11-26 20:41:22.623746] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:08.114 [2024-11-26 20:41:22.623787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cce50 with addr=10.0.0.3, port=4420 00:18:08.114 [2024-11-26 20:41:22.623794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cce50 is same with the state(6) to be set 00:18:08.114 [2024-11-26 20:41:22.623808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cce50 (9): Bad file descriptor 00:18:08.114 [2024-11-26 20:41:22.623818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:18:08.114 [2024-11-26 20:41:22.623822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:18:08.114 [2024-11-26 20:41:22.623827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:18:08.114 [2024-11-26 20:41:22.623833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:18:08.114 [2024-11-26 20:41:22.623838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:09.976 5493.25 IOPS, 21.46 MiB/s [2024-11-26T20:41:24.790Z] 4394.60 IOPS, 17.17 MiB/s [2024-11-26T20:41:24.790Z] [2024-11-26 20:41:24.624056] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:10.235 [2024-11-26 20:41:24.624097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cce50 with addr=10.0.0.3, port=4420 00:18:10.235 [2024-11-26 20:41:24.624104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cce50 is same with the state(6) to be set 00:18:10.235 [2024-11-26 20:41:24.624116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cce50 (9): Bad file descriptor 00:18:10.235 [2024-11-26 20:41:24.624127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:18:10.235 [2024-11-26 20:41:24.624131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:18:10.235 [2024-11-26 20:41:24.624137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:18:10.235 [2024-11-26 20:41:24.624143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:18:10.235 [2024-11-26 20:41:24.624149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:12.103 3662.17 IOPS, 14.31 MiB/s [2024-11-26T20:41:26.658Z] 3139.00 IOPS, 12.26 MiB/s [2024-11-26T20:41:26.658Z] [2024-11-26 20:41:26.624324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:18:12.103 [2024-11-26 20:41:26.624366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:18:12.103 [2024-11-26 20:41:26.624374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:18:12.103 [2024-11-26 20:41:26.624379] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:18:12.103 [2024-11-26 20:41:26.624385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
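The entries above show the bdev_nvme layer retrying the controller reset roughly every two seconds after each connect() failure (errno 111, ECONNREFUSED) against 10.0.0.3:4420, until the controller is left in a failed state. The timeout test whose summary follows verifies this behaviour by counting 'reconnect delay bdev controller NVMe0' entries in the trace file it collects. A minimal sketch of that style of check, using the trace path that appears later in this log and assuming the test's threshold of more than two delayed reconnects:

    # count delayed reconnect events recorded in the bdevperf trace output
    trace_file=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt   # path taken from this log
    delay_count=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace_file")
    # the timeout test treats two or fewer delayed reconnects over ~8 seconds as a failure
    if (( delay_count <= 2 )); then
        echo "expected >2 reconnect delays, got $delay_count" >&2
        exit 1
    fi

In the run recorded here the count comes out to 3, so the equivalent check in the test script passes and the trace file is removed.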
00:18:13.316 2746.62 IOPS, 10.73 MiB/s 00:18:13.316 Latency(us) 00:18:13.316 [2024-11-26T20:41:27.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.316 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:18:13.316 NVMe0n1 : 8.10 2711.06 10.59 15.79 0.00 46854.25 6125.10 7020619.62 00:18:13.316 [2024-11-26T20:41:27.871Z] =================================================================================================================== 00:18:13.316 [2024-11-26T20:41:27.871Z] Total : 2711.06 10.59 15.79 0.00 46854.25 6125.10 7020619.62 00:18:13.316 { 00:18:13.316 "results": [ 00:18:13.316 { 00:18:13.316 "job": "NVMe0n1", 00:18:13.316 "core_mask": "0x4", 00:18:13.316 "workload": "randread", 00:18:13.316 "status": "finished", 00:18:13.316 "queue_depth": 128, 00:18:13.316 "io_size": 4096, 00:18:13.316 "runtime": 8.104944, 00:18:13.316 "iops": 2711.0612978942236, 00:18:13.316 "mibps": 10.59008319489931, 00:18:13.316 "io_failed": 128, 00:18:13.316 "io_timeout": 0, 00:18:13.316 "avg_latency_us": 46854.25338804718, 00:18:13.316 "min_latency_us": 6125.095384615384, 00:18:13.316 "max_latency_us": 7020619.618461538 00:18:13.316 } 00:18:13.316 ], 00:18:13.316 "core_count": 1 00:18:13.316 } 00:18:13.316 20:41:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:13.316 Attaching 5 probes... 00:18:13.316 1238.002565: reset bdev controller NVMe0 00:18:13.316 1238.050344: reconnect bdev controller NVMe0 00:18:13.316 3238.318048: reconnect delay bdev controller NVMe0 00:18:13.316 3238.332054: reconnect bdev controller NVMe0 00:18:13.316 5238.632577: reconnect delay bdev controller NVMe0 00:18:13.316 5238.646292: reconnect bdev controller NVMe0 00:18:13.316 7238.954237: reconnect delay bdev controller NVMe0 00:18:13.316 7238.968965: reconnect bdev controller NVMe0 00:18:13.316 20:41:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:18:13.316 20:41:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:18:13.316 20:41:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 81454 00:18:13.316 20:41:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:13.316 20:41:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 81439 00:18:13.316 20:41:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81439 ']' 00:18:13.316 20:41:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81439 00:18:13.316 20:41:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:18:13.316 20:41:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.316 20:41:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81439 00:18:13.316 killing process with pid 81439 00:18:13.316 Received shutdown signal, test time was about 8.163269 seconds 00:18:13.316 00:18:13.316 Latency(us) 00:18:13.316 [2024-11-26T20:41:27.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.317 [2024-11-26T20:41:27.872Z] =================================================================================================================== 00:18:13.317 [2024-11-26T20:41:27.872Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:13.317 20:41:27 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:13.317 20:41:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:13.317 20:41:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81439' 00:18:13.317 20:41:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81439 00:18:13.317 20:41:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81439 00:18:13.317 20:41:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:13.612 20:41:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:18:13.612 20:41:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:18:13.612 20:41:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:13.612 20:41:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:18:13.612 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:13.612 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:18:13.613 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:13.613 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:13.613 rmmod nvme_tcp 00:18:13.613 rmmod nvme_fabrics 00:18:13.613 rmmod nvme_keyring 00:18:13.613 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:13.613 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:18:13.613 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:18:13.613 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 81002 ']' 00:18:13.613 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 81002 00:18:13.613 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81002 ']' 00:18:13.613 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81002 00:18:13.613 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:18:13.613 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.613 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81002 00:18:13.613 killing process with pid 81002 00:18:13.613 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:13.613 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:13.613 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81002' 00:18:13.613 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81002 00:18:13.613 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81002 00:18:13.871 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:13.871 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:13.871 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:13.871 20:41:28 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:18:13.871 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:18:13.871 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:13.871 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:18:13.871 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:13.871 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:13.871 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:13.871 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:13.871 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:13.871 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:13.871 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:13.871 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:13.871 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:13.871 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:13.871 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:13.871 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:13.871 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:13.871 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:13.871 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:13.871 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:13.871 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.871 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:13.871 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.130 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:18:14.130 ************************************ 00:18:14.130 END TEST nvmf_timeout 00:18:14.130 ************************************ 00:18:14.130 00:18:14.130 real 0m45.365s 00:18:14.130 user 2m13.343s 00:18:14.130 sys 0m4.267s 00:18:14.130 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:14.130 20:41:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:14.130 20:41:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:18:14.130 20:41:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:14.130 ************************************ 00:18:14.130 END TEST nvmf_host 00:18:14.130 ************************************ 00:18:14.130 00:18:14.130 real 4m59.418s 00:18:14.130 user 12m52.295s 00:18:14.130 sys 0m53.664s 00:18:14.130 20:41:28 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:18:14.130 20:41:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.130 20:41:28 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:18:14.130 20:41:28 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:18:14.130 00:18:14.130 real 11m55.730s 00:18:14.130 user 28m51.702s 00:18:14.130 sys 2m24.634s 00:18:14.130 ************************************ 00:18:14.130 END TEST nvmf_tcp 00:18:14.130 ************************************ 00:18:14.130 20:41:28 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:14.130 20:41:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:14.130 20:41:28 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:18:14.130 20:41:28 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:14.130 20:41:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:14.130 20:41:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:14.130 20:41:28 -- common/autotest_common.sh@10 -- # set +x 00:18:14.130 ************************************ 00:18:14.130 START TEST nvmf_dif 00:18:14.130 ************************************ 00:18:14.130 20:41:28 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:14.130 * Looking for test storage... 00:18:14.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:14.130 20:41:28 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:14.130 20:41:28 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:18:14.130 20:41:28 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:14.389 20:41:28 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:14.389 20:41:28 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:18:14.389 20:41:28 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:14.389 20:41:28 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:14.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.389 --rc genhtml_branch_coverage=1 00:18:14.389 --rc genhtml_function_coverage=1 00:18:14.389 --rc genhtml_legend=1 00:18:14.389 --rc geninfo_all_blocks=1 00:18:14.389 --rc geninfo_unexecuted_blocks=1 00:18:14.389 00:18:14.389 ' 00:18:14.389 20:41:28 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:14.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.389 --rc genhtml_branch_coverage=1 00:18:14.389 --rc genhtml_function_coverage=1 00:18:14.389 --rc genhtml_legend=1 00:18:14.389 --rc geninfo_all_blocks=1 00:18:14.389 --rc geninfo_unexecuted_blocks=1 00:18:14.389 00:18:14.389 ' 00:18:14.389 20:41:28 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:14.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.389 --rc genhtml_branch_coverage=1 00:18:14.389 --rc genhtml_function_coverage=1 00:18:14.389 --rc genhtml_legend=1 00:18:14.389 --rc geninfo_all_blocks=1 00:18:14.389 --rc geninfo_unexecuted_blocks=1 00:18:14.389 00:18:14.389 ' 00:18:14.389 20:41:28 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:14.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.389 --rc genhtml_branch_coverage=1 00:18:14.389 --rc genhtml_function_coverage=1 00:18:14.389 --rc genhtml_legend=1 00:18:14.389 --rc geninfo_all_blocks=1 00:18:14.389 --rc geninfo_unexecuted_blocks=1 00:18:14.389 00:18:14.389 ' 00:18:14.389 20:41:28 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:14.389 20:41:28 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:18:14.389 20:41:28 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.389 20:41:28 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.389 20:41:28 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.389 20:41:28 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.389 20:41:28 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.389 20:41:28 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.389 20:41:28 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.389 20:41:28 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.389 20:41:28 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:14.390 20:41:28 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:18:14.390 20:41:28 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.390 20:41:28 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.390 20:41:28 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.390 20:41:28 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.390 20:41:28 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.390 20:41:28 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.390 20:41:28 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:18:14.390 20:41:28 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.390 20:41:28 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:14.390 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:14.390 20:41:28 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:18:14.390 20:41:28 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:18:14.390 20:41:28 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:18:14.390 20:41:28 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:18:14.390 20:41:28 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.390 20:41:28 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:14.390 20:41:28 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:14.390 Cannot find device 
"nvmf_init_br" 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@162 -- # true 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:14.390 Cannot find device "nvmf_init_br2" 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@163 -- # true 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:14.390 Cannot find device "nvmf_tgt_br" 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@164 -- # true 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:14.390 Cannot find device "nvmf_tgt_br2" 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@165 -- # true 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:14.390 Cannot find device "nvmf_init_br" 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@166 -- # true 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:14.390 Cannot find device "nvmf_init_br2" 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@167 -- # true 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:14.390 Cannot find device "nvmf_tgt_br" 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@168 -- # true 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:14.390 Cannot find device "nvmf_tgt_br2" 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@169 -- # true 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:14.390 Cannot find device "nvmf_br" 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@170 -- # true 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:14.390 Cannot find device "nvmf_init_if" 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@171 -- # true 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:14.390 Cannot find device "nvmf_init_if2" 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@172 -- # true 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:14.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@173 -- # true 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:14.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@174 -- # true 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:14.390 20:41:28 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:14.391 20:41:28 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:14.391 20:41:28 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:14.391 20:41:28 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:14.391 20:41:28 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:14.391 20:41:28 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:14.391 20:41:28 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:14.391 20:41:28 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:18:14.391 20:41:28 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:14.391 20:41:28 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:14.391 20:41:28 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:14.391 20:41:28 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:14.391 20:41:28 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:14.391 20:41:28 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:14.391 20:41:28 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:14.391 20:41:28 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:14.391 20:41:28 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:14.391 20:41:28 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:14.391 20:41:28 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:14.391 20:41:28 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:14.391 20:41:28 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:14.391 20:41:28 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:14.649 20:41:28 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:14.649 20:41:28 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:14.649 20:41:28 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:14.649 20:41:28 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:14.649 20:41:28 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:14.649 20:41:28 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:14.649 20:41:28 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:14.649 20:41:28 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:14.649 20:41:28 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:14.649 20:41:28 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:14.649 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:14.649 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:18:14.649 00:18:14.649 --- 10.0.0.3 ping statistics --- 00:18:14.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.649 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:18:14.649 20:41:28 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:14.649 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:18:14.649 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:18:14.649 00:18:14.649 --- 10.0.0.4 ping statistics --- 00:18:14.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.649 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:18:14.649 20:41:28 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:14.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:14.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 00:18:14.649 00:18:14.649 --- 10.0.0.1 ping statistics --- 00:18:14.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.649 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 00:18:14.649 20:41:28 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:14.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:14.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:18:14.649 00:18:14.649 --- 10.0.0.2 ping statistics --- 00:18:14.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.649 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:18:14.649 20:41:28 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:14.649 20:41:28 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:18:14.649 20:41:28 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:18:14.649 20:41:28 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:14.907 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:14.907 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:14.907 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:14.907 20:41:29 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:14.907 20:41:29 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:14.907 20:41:29 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:14.907 20:41:29 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:14.907 20:41:29 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:14.907 20:41:29 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:14.907 20:41:29 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:18:14.907 20:41:29 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:18:14.907 20:41:29 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:14.907 20:41:29 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:14.907 20:41:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:14.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.907 20:41:29 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=81983 00:18:14.907 20:41:29 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 81983 00:18:14.907 20:41:29 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:14.907 20:41:29 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 81983 ']' 00:18:14.907 20:41:29 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.907 20:41:29 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.907 20:41:29 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:14.907 20:41:29 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.907 20:41:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:14.907 [2024-11-26 20:41:29.328490] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:18:14.907 [2024-11-26 20:41:29.328544] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.907 [2024-11-26 20:41:29.460336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.165 [2024-11-26 20:41:29.505385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.165 [2024-11-26 20:41:29.505429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.165 [2024-11-26 20:41:29.505437] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.165 [2024-11-26 20:41:29.505443] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.165 [2024-11-26 20:41:29.505449] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:15.165 [2024-11-26 20:41:29.505798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.165 [2024-11-26 20:41:29.534924] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:15.731 20:41:30 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.732 20:41:30 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:18:15.732 20:41:30 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:15.732 20:41:30 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:15.732 20:41:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:15.732 20:41:30 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.732 20:41:30 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:18:15.732 20:41:30 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:18:15.732 20:41:30 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.732 20:41:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:15.732 [2024-11-26 20:41:30.225882] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.732 20:41:30 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.732 20:41:30 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:18:15.732 20:41:30 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:15.732 20:41:30 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:15.732 20:41:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:15.732 ************************************ 00:18:15.732 START TEST fio_dif_1_default 00:18:15.732 ************************************ 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:18:15.732 20:41:30 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:15.732 bdev_null0 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:15.732 [2024-11-26 20:41:30.265953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:18:15.732 20:41:30 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:15.732 { 00:18:15.732 "params": { 00:18:15.732 "name": "Nvme$subsystem", 00:18:15.732 "trtype": "$TEST_TRANSPORT", 00:18:15.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:15.732 "adrfam": "ipv4", 00:18:15.732 "trsvcid": "$NVMF_PORT", 00:18:15.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:15.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:15.732 "hdgst": ${hdgst:-false}, 00:18:15.732 "ddgst": ${ddgst:-false} 00:18:15.732 }, 00:18:15.732 "method": "bdev_nvme_attach_controller" 00:18:15.732 } 00:18:15.732 EOF 00:18:15.732 )") 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:18:15.732 20:41:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:15.732 "params": { 00:18:15.732 "name": "Nvme0", 00:18:15.732 "trtype": "tcp", 00:18:15.732 "traddr": "10.0.0.3", 00:18:15.732 "adrfam": "ipv4", 00:18:15.732 "trsvcid": "4420", 00:18:15.732 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:15.732 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:15.732 "hdgst": false, 00:18:15.732 "ddgst": false 00:18:15.732 }, 00:18:15.732 "method": "bdev_nvme_attach_controller" 00:18:15.732 }' 00:18:15.990 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:15.990 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:15.990 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:15.990 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:15.990 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:15.990 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:15.990 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:15.990 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:15.990 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:15.990 20:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:15.990 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:15.990 fio-3.35 00:18:15.990 Starting 1 thread 00:18:28.186 00:18:28.186 filename0: (groupid=0, jobs=1): err= 0: pid=82044: Tue Nov 26 20:41:40 2024 00:18:28.186 read: IOPS=11.8k, BW=46.2MiB/s (48.4MB/s)(462MiB/10001msec) 00:18:28.186 slat (nsec): min=5424, max=49367, avg=6320.09, stdev=1241.18 00:18:28.186 clat (usec): min=97, max=4144, avg=321.28, stdev=43.19 00:18:28.186 lat (usec): min=103, max=4166, avg=327.60, stdev=43.17 00:18:28.186 clat percentiles (usec): 00:18:28.186 | 1.00th=[ 277], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 293], 00:18:28.186 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 318], 00:18:28.186 | 70.00th=[ 334], 80.00th=[ 363], 90.00th=[ 371], 95.00th=[ 379], 00:18:28.186 | 99.00th=[ 400], 99.50th=[ 408], 99.90th=[ 545], 99.95th=[ 758], 00:18:28.186 | 99.99th=[ 1172] 00:18:28.186 bw ( KiB/s): min=40960, max=51616, per=100.00%, avg=47491.37, stdev=3727.99, samples=19 00:18:28.186 iops : min=10240, max=12904, avg=11872.84, stdev=932.00, samples=19 00:18:28.186 lat (usec) : 100=0.01%, 500=99.86%, 750=0.08%, 1000=0.04% 00:18:28.186 lat (msec) : 2=0.01%, 10=0.01% 00:18:28.186 cpu : usr=88.90%, sys=9.96%, ctx=26, majf=0, minf=9 00:18:28.186 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:28.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.186 issued rwts: total=118177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:28.186 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:28.186 00:18:28.186 Run status 
group 0 (all jobs): 00:18:28.186 READ: bw=46.2MiB/s (48.4MB/s), 46.2MiB/s-46.2MiB/s (48.4MB/s-48.4MB/s), io=462MiB (484MB), run=10001-10001msec 00:18:28.186 20:41:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:18:28.186 20:41:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:18:28.186 20:41:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:18:28.186 20:41:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:28.186 20:41:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:18:28.186 20:41:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:28.186 ************************************ 00:18:28.186 END TEST fio_dif_1_default 00:18:28.186 ************************************ 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.186 00:18:28.186 real 0m10.780s 00:18:28.186 user 0m9.354s 00:18:28.186 sys 0m1.177s 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:28.186 20:41:41 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:18:28.186 20:41:41 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:28.186 20:41:41 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:28.186 20:41:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:28.186 ************************************ 00:18:28.186 START TEST fio_dif_1_multi_subsystems 00:18:28.186 ************************************ 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:28.186 bdev_null0 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:28.186 [2024-11-26 20:41:41.083581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:28.186 bdev_null1 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.186 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:28.187 { 00:18:28.187 "params": { 00:18:28.187 "name": "Nvme$subsystem", 00:18:28.187 "trtype": "$TEST_TRANSPORT", 00:18:28.187 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:28.187 "adrfam": "ipv4", 00:18:28.187 "trsvcid": "$NVMF_PORT", 00:18:28.187 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:28.187 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:28.187 "hdgst": ${hdgst:-false}, 00:18:28.187 "ddgst": ${ddgst:-false} 00:18:28.187 }, 00:18:28.187 "method": "bdev_nvme_attach_controller" 00:18:28.187 } 00:18:28.187 EOF 00:18:28.187 )") 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 
00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:28.187 { 00:18:28.187 "params": { 00:18:28.187 "name": "Nvme$subsystem", 00:18:28.187 "trtype": "$TEST_TRANSPORT", 00:18:28.187 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:28.187 "adrfam": "ipv4", 00:18:28.187 "trsvcid": "$NVMF_PORT", 00:18:28.187 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:28.187 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:28.187 "hdgst": ${hdgst:-false}, 00:18:28.187 "ddgst": ${ddgst:-false} 00:18:28.187 }, 00:18:28.187 "method": "bdev_nvme_attach_controller" 00:18:28.187 } 00:18:28.187 EOF 00:18:28.187 )") 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:28.187 "params": { 00:18:28.187 "name": "Nvme0", 00:18:28.187 "trtype": "tcp", 00:18:28.187 "traddr": "10.0.0.3", 00:18:28.187 "adrfam": "ipv4", 00:18:28.187 "trsvcid": "4420", 00:18:28.187 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:28.187 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:28.187 "hdgst": false, 00:18:28.187 "ddgst": false 00:18:28.187 }, 00:18:28.187 "method": "bdev_nvme_attach_controller" 00:18:28.187 },{ 00:18:28.187 "params": { 00:18:28.187 "name": "Nvme1", 00:18:28.187 "trtype": "tcp", 00:18:28.187 "traddr": "10.0.0.3", 00:18:28.187 "adrfam": "ipv4", 00:18:28.187 "trsvcid": "4420", 00:18:28.187 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.187 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:28.187 "hdgst": false, 00:18:28.187 "ddgst": false 00:18:28.187 }, 00:18:28.187 "method": "bdev_nvme_attach_controller" 00:18:28.187 }' 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n 
'' ]] 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:28.187 20:41:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:28.187 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:28.187 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:28.187 fio-3.35 00:18:28.187 Starting 2 threads 00:18:38.268 00:18:38.268 filename0: (groupid=0, jobs=1): err= 0: pid=82211: Tue Nov 26 20:41:51 2024 00:18:38.268 read: IOPS=6913, BW=27.0MiB/s (28.3MB/s)(270MiB/10001msec) 00:18:38.268 slat (nsec): min=5457, max=37588, avg=8298.86, stdev=4487.53 00:18:38.268 clat (usec): min=491, max=883, avg=556.29, stdev=24.18 00:18:38.268 lat (usec): min=497, max=909, avg=564.59, stdev=25.84 00:18:38.268 clat percentiles (usec): 00:18:38.268 | 1.00th=[ 510], 5.00th=[ 523], 10.00th=[ 529], 20.00th=[ 537], 00:18:38.268 | 30.00th=[ 545], 40.00th=[ 545], 50.00th=[ 553], 60.00th=[ 562], 00:18:38.268 | 70.00th=[ 562], 80.00th=[ 570], 90.00th=[ 586], 95.00th=[ 603], 00:18:38.268 | 99.00th=[ 635], 99.50th=[ 644], 99.90th=[ 676], 99.95th=[ 685], 00:18:38.268 | 99.99th=[ 717] 00:18:38.268 bw ( KiB/s): min=26475, max=28928, per=50.01%, avg=27660.37, stdev=665.91, samples=19 00:18:38.268 iops : min= 6618, max= 7232, avg=6915.05, stdev=166.55, samples=19 00:18:38.268 lat (usec) : 500=0.09%, 750=99.90%, 1000=0.01% 00:18:38.268 cpu : usr=91.45%, sys=7.80%, ctx=7, majf=0, minf=0 00:18:38.268 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:38.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.268 issued rwts: total=69140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:38.268 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:38.268 filename1: (groupid=0, jobs=1): err= 0: pid=82212: Tue Nov 26 20:41:51 2024 00:18:38.268 read: IOPS=6913, BW=27.0MiB/s (28.3MB/s)(270MiB/10001msec) 00:18:38.268 slat (nsec): min=5504, max=37239, avg=8722.07, stdev=4381.56 00:18:38.268 clat (usec): min=295, max=754, avg=555.51, stdev=29.31 00:18:38.268 lat (usec): min=300, max=783, avg=564.23, stdev=30.68 00:18:38.268 clat percentiles (usec): 00:18:38.268 | 1.00th=[ 486], 5.00th=[ 502], 10.00th=[ 519], 20.00th=[ 537], 00:18:38.268 | 30.00th=[ 545], 40.00th=[ 553], 50.00th=[ 553], 60.00th=[ 562], 00:18:38.268 | 70.00th=[ 570], 80.00th=[ 578], 90.00th=[ 594], 95.00th=[ 603], 00:18:38.268 | 99.00th=[ 635], 99.50th=[ 644], 99.90th=[ 676], 99.95th=[ 693], 00:18:38.268 | 99.99th=[ 717] 00:18:38.268 bw ( KiB/s): min=26528, max=28960, per=50.02%, avg=27663.16, stdev=661.62, samples=19 00:18:38.268 iops : min= 6632, max= 7240, avg=6915.79, stdev=165.41, samples=19 00:18:38.268 lat (usec) : 500=4.28%, 750=95.72%, 1000=0.01% 00:18:38.268 cpu : usr=91.19%, sys=8.01%, ctx=23, majf=0, minf=0 00:18:38.268 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:38.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.268 issued rwts: total=69144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:38.268 latency : target=0, 
window=0, percentile=100.00%, depth=4 00:18:38.268 00:18:38.268 Run status group 0 (all jobs): 00:18:38.268 READ: bw=54.0MiB/s (56.6MB/s), 27.0MiB/s-27.0MiB/s (28.3MB/s-28.3MB/s), io=540MiB (566MB), run=10001-10001msec 00:18:38.268 20:41:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:18:38.268 20:41:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:18:38.268 20:41:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:18:38.268 20:41:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:38.268 20:41:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:18:38.268 20:41:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:38.268 20:41:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.268 20:41:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:38.268 20:41:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.268 20:41:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:38.268 20:41:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.268 20:41:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:38.268 20:41:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.268 20:41:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:18:38.268 20:41:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:18:38.268 20:41:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:18:38.268 20:41:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:38.268 20:41:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.268 20:41:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:38.268 20:41:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.268 20:41:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:18:38.268 20:41:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.268 20:41:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:38.268 ************************************ 00:18:38.268 END TEST fio_dif_1_multi_subsystems 00:18:38.268 ************************************ 00:18:38.268 20:41:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.268 00:18:38.268 real 0m10.940s 00:18:38.268 user 0m18.861s 00:18:38.268 sys 0m1.781s 00:18:38.268 20:41:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:38.268 20:41:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:38.268 20:41:52 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:18:38.268 20:41:52 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:38.268 20:41:52 nvmf_dif -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:18:38.268 20:41:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:38.268 ************************************ 00:18:38.268 START TEST fio_dif_rand_params 00:18:38.268 ************************************ 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:38.268 bdev_null0 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:38.268 [2024-11-26 20:41:52.066383] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:18:38.268 
20:41:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:38.268 20:41:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:38.268 { 00:18:38.268 "params": { 00:18:38.268 "name": "Nvme$subsystem", 00:18:38.268 "trtype": "$TEST_TRANSPORT", 00:18:38.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:38.268 "adrfam": "ipv4", 00:18:38.268 "trsvcid": "$NVMF_PORT", 00:18:38.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:38.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:38.268 "hdgst": ${hdgst:-false}, 00:18:38.268 "ddgst": ${ddgst:-false} 00:18:38.268 }, 00:18:38.268 "method": "bdev_nvme_attach_controller" 00:18:38.268 } 00:18:38.269 EOF 00:18:38.269 )") 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:38.269 "params": { 00:18:38.269 "name": "Nvme0", 00:18:38.269 "trtype": "tcp", 00:18:38.269 "traddr": "10.0.0.3", 00:18:38.269 "adrfam": "ipv4", 00:18:38.269 "trsvcid": "4420", 00:18:38.269 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:38.269 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:38.269 "hdgst": false, 00:18:38.269 "ddgst": false 00:18:38.269 }, 00:18:38.269 "method": "bdev_nvme_attach_controller" 00:18:38.269 }' 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:38.269 20:41:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:38.269 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:18:38.269 ... 
00:18:38.269 fio-3.35 00:18:38.269 Starting 3 threads 00:18:43.528 00:18:43.528 filename0: (groupid=0, jobs=1): err= 0: pid=82372: Tue Nov 26 20:41:57 2024 00:18:43.528 read: IOPS=342, BW=42.8MiB/s (44.9MB/s)(214MiB/5006msec) 00:18:43.528 slat (nsec): min=5553, max=31824, avg=9432.90, stdev=5306.22 00:18:43.528 clat (usec): min=5869, max=9215, avg=8745.82, stdev=171.33 00:18:43.528 lat (usec): min=5875, max=9244, avg=8755.25, stdev=171.61 00:18:43.528 clat percentiles (usec): 00:18:43.528 | 1.00th=[ 8586], 5.00th=[ 8717], 10.00th=[ 8717], 20.00th=[ 8717], 00:18:43.528 | 30.00th=[ 8717], 40.00th=[ 8717], 50.00th=[ 8717], 60.00th=[ 8717], 00:18:43.528 | 70.00th=[ 8717], 80.00th=[ 8717], 90.00th=[ 8848], 95.00th=[ 8848], 00:18:43.528 | 99.00th=[ 8848], 99.50th=[ 8848], 99.90th=[ 9241], 99.95th=[ 9241], 00:18:43.528 | 99.99th=[ 9241] 00:18:43.528 bw ( KiB/s): min=43008, max=44544, per=33.31%, avg=43776.00, stdev=362.04, samples=10 00:18:43.528 iops : min= 336, max= 348, avg=342.00, stdev= 2.83, samples=10 00:18:43.528 lat (msec) : 10=100.00% 00:18:43.528 cpu : usr=92.87%, sys=6.75%, ctx=9, majf=0, minf=0 00:18:43.528 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:43.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.528 issued rwts: total=1713,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.528 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:43.528 filename0: (groupid=0, jobs=1): err= 0: pid=82373: Tue Nov 26 20:41:57 2024 00:18:43.528 read: IOPS=342, BW=42.8MiB/s (44.9MB/s)(214MiB/5001msec) 00:18:43.528 slat (nsec): min=5585, max=32162, avg=10041.73, stdev=5653.49 00:18:43.528 clat (usec): min=3283, max=9011, avg=8734.72, stdev=314.09 00:18:43.528 lat (usec): min=3305, max=9019, avg=8744.76, stdev=313.83 00:18:43.528 clat percentiles (usec): 00:18:43.528 | 1.00th=[ 8586], 5.00th=[ 8717], 10.00th=[ 8717], 20.00th=[ 8717], 00:18:43.528 | 30.00th=[ 8717], 40.00th=[ 8717], 50.00th=[ 8717], 60.00th=[ 8717], 00:18:43.528 | 70.00th=[ 8717], 80.00th=[ 8848], 90.00th=[ 8848], 95.00th=[ 8848], 00:18:43.528 | 99.00th=[ 8848], 99.50th=[ 8848], 99.90th=[ 8979], 99.95th=[ 8979], 00:18:43.528 | 99.99th=[ 8979] 00:18:43.528 bw ( KiB/s): min=43776, max=44544, per=33.38%, avg=43861.33, stdev=256.00, samples=9 00:18:43.528 iops : min= 342, max= 348, avg=342.67, stdev= 2.00, samples=9 00:18:43.528 lat (msec) : 4=0.35%, 10=99.65% 00:18:43.528 cpu : usr=93.16%, sys=6.42%, ctx=4, majf=0, minf=0 00:18:43.528 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:43.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.528 issued rwts: total=1713,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.528 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:43.528 filename0: (groupid=0, jobs=1): err= 0: pid=82374: Tue Nov 26 20:41:57 2024 00:18:43.528 read: IOPS=342, BW=42.8MiB/s (44.9MB/s)(214MiB/5006msec) 00:18:43.528 slat (nsec): min=5553, max=56275, avg=10039.47, stdev=5923.67 00:18:43.528 clat (usec): min=5867, max=9094, avg=8743.84, stdev=170.96 00:18:43.528 lat (usec): min=5873, max=9150, avg=8753.88, stdev=171.23 00:18:43.528 clat percentiles (usec): 00:18:43.528 | 1.00th=[ 8586], 5.00th=[ 8717], 10.00th=[ 8717], 20.00th=[ 8717], 00:18:43.528 | 30.00th=[ 8717], 40.00th=[ 8717], 50.00th=[ 8717], 60.00th=[ 
8717], 00:18:43.528 | 70.00th=[ 8717], 80.00th=[ 8717], 90.00th=[ 8848], 95.00th=[ 8848], 00:18:43.528 | 99.00th=[ 8848], 99.50th=[ 8848], 99.90th=[ 9110], 99.95th=[ 9110], 00:18:43.528 | 99.99th=[ 9110] 00:18:43.528 bw ( KiB/s): min=43008, max=44544, per=33.31%, avg=43776.00, stdev=362.04, samples=10 00:18:43.528 iops : min= 336, max= 348, avg=342.00, stdev= 2.83, samples=10 00:18:43.528 lat (msec) : 10=100.00% 00:18:43.528 cpu : usr=93.67%, sys=5.95%, ctx=43, majf=0, minf=0 00:18:43.528 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:43.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.528 issued rwts: total=1713,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.528 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:43.528 00:18:43.528 Run status group 0 (all jobs): 00:18:43.528 READ: bw=128MiB/s (135MB/s), 42.8MiB/s-42.8MiB/s (44.9MB/s-44.9MB/s), io=642MiB (674MB), run=5001-5006msec 00:18:43.528 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:18:43.528 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:18:43.528 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:18:43.528 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:43.528 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:18:43.528 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:43.528 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.528 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.528 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.528 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:43.528 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.529 bdev_null0 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.529 [2024-11-26 20:41:57.862927] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.529 bdev_null1 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.529 bdev_null2 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:43.529 20:41:57 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:43.529 { 00:18:43.529 "params": { 00:18:43.529 "name": "Nvme$subsystem", 00:18:43.529 "trtype": "$TEST_TRANSPORT", 00:18:43.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:43.529 "adrfam": "ipv4", 00:18:43.529 "trsvcid": "$NVMF_PORT", 00:18:43.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:43.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:43.529 "hdgst": ${hdgst:-false}, 00:18:43.529 "ddgst": ${ddgst:-false} 00:18:43.529 }, 00:18:43.529 "method": "bdev_nvme_attach_controller" 00:18:43.529 } 00:18:43.529 EOF 00:18:43.529 )") 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:43.529 { 00:18:43.529 "params": { 00:18:43.529 "name": "Nvme$subsystem", 00:18:43.529 "trtype": "$TEST_TRANSPORT", 00:18:43.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:43.529 "adrfam": "ipv4", 00:18:43.529 "trsvcid": "$NVMF_PORT", 00:18:43.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:43.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:43.529 "hdgst": ${hdgst:-false}, 00:18:43.529 "ddgst": ${ddgst:-false} 00:18:43.529 }, 00:18:43.529 "method": "bdev_nvme_attach_controller" 00:18:43.529 } 00:18:43.529 EOF 00:18:43.529 )") 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:18:43.529 20:41:57 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:18:43.530 20:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:18:43.530 20:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:43.530 20:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:43.530 { 00:18:43.530 "params": { 00:18:43.530 "name": "Nvme$subsystem", 00:18:43.530 "trtype": "$TEST_TRANSPORT", 00:18:43.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:43.530 "adrfam": "ipv4", 00:18:43.530 "trsvcid": "$NVMF_PORT", 00:18:43.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:43.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:43.530 "hdgst": ${hdgst:-false}, 00:18:43.530 "ddgst": ${ddgst:-false} 00:18:43.530 }, 00:18:43.530 "method": "bdev_nvme_attach_controller" 00:18:43.530 } 00:18:43.530 EOF 00:18:43.530 )") 00:18:43.530 20:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:18:43.530 20:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:18:43.530 20:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:18:43.530 20:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:43.530 "params": { 00:18:43.530 "name": "Nvme0", 00:18:43.530 "trtype": "tcp", 00:18:43.530 "traddr": "10.0.0.3", 00:18:43.530 "adrfam": "ipv4", 00:18:43.530 "trsvcid": "4420", 00:18:43.530 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:43.530 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:43.530 "hdgst": false, 00:18:43.530 "ddgst": false 00:18:43.530 }, 00:18:43.530 "method": "bdev_nvme_attach_controller" 00:18:43.530 },{ 00:18:43.530 "params": { 00:18:43.530 "name": "Nvme1", 00:18:43.530 "trtype": "tcp", 00:18:43.530 "traddr": "10.0.0.3", 00:18:43.530 "adrfam": "ipv4", 00:18:43.530 "trsvcid": "4420", 00:18:43.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:43.530 "hdgst": false, 00:18:43.530 "ddgst": false 00:18:43.530 }, 00:18:43.530 "method": "bdev_nvme_attach_controller" 00:18:43.530 },{ 00:18:43.530 "params": { 00:18:43.530 "name": "Nvme2", 00:18:43.530 "trtype": "tcp", 00:18:43.530 "traddr": "10.0.0.3", 00:18:43.530 "adrfam": "ipv4", 00:18:43.530 "trsvcid": "4420", 00:18:43.530 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:43.530 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:43.530 "hdgst": false, 00:18:43.530 "ddgst": false 00:18:43.530 }, 00:18:43.530 "method": "bdev_nvme_attach_controller" 00:18:43.530 }' 00:18:43.530 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:43.530 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:43.530 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:43.530 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:43.530 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:43.530 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:43.530 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:43.530 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:43.530 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:43.530 20:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:43.787 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:43.787 ... 00:18:43.787 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:43.787 ... 00:18:43.787 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:43.787 ... 00:18:43.787 fio-3.35 00:18:43.787 Starting 24 threads 00:18:56.003 00:18:56.003 filename0: (groupid=0, jobs=1): err= 0: pid=82470: Tue Nov 26 20:42:08 2024 00:18:56.003 read: IOPS=318, BW=1273KiB/s (1304kB/s)(12.5MiB/10031msec) 00:18:56.003 slat (usec): min=3, max=8013, avg=20.67, stdev=223.98 00:18:56.003 clat (usec): min=14439, max=96773, avg=50127.00, stdev=14311.31 00:18:56.003 lat (usec): min=14451, max=96780, avg=50147.67, stdev=14309.26 00:18:56.003 clat percentiles (usec): 00:18:56.003 | 1.00th=[17171], 5.00th=[24773], 10.00th=[32113], 20.00th=[37487], 00:18:56.003 | 30.00th=[40109], 40.00th=[47973], 50.00th=[51643], 60.00th=[55313], 00:18:56.003 | 70.00th=[56361], 80.00th=[61080], 90.00th=[66847], 95.00th=[74974], 00:18:56.003 | 99.00th=[85459], 99.50th=[86508], 99.90th=[94897], 99.95th=[94897], 00:18:56.003 | 99.99th=[96994] 00:18:56.003 bw ( KiB/s): min= 1088, max= 1552, per=4.13%, avg=1270.15, stdev=116.15, samples=20 00:18:56.003 iops : min= 272, max= 388, avg=317.50, stdev=28.95, samples=20 00:18:56.003 lat (msec) : 20=1.44%, 50=44.82%, 100=53.74% 00:18:56.003 cpu : usr=44.16%, sys=1.32%, ctx=1331, majf=0, minf=9 00:18:56.003 IO depths : 1=0.1%, 2=1.1%, 4=4.6%, 8=78.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:18:56.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.003 complete : 0=0.0%, 4=88.6%, 8=10.4%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.003 issued rwts: total=3193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.003 filename0: (groupid=0, jobs=1): err= 0: pid=82471: Tue Nov 26 20:42:08 2024 00:18:56.003 read: IOPS=329, BW=1316KiB/s (1348kB/s)(12.9MiB/10036msec) 00:18:56.003 slat (usec): min=2, max=12017, avg=18.33, stdev=245.65 00:18:56.003 clat (msec): min=14, max=104, avg=48.50, stdev=14.71 00:18:56.003 lat (msec): min=14, max=104, avg=48.52, stdev=14.72 00:18:56.003 clat percentiles (msec): 00:18:56.003 | 1.00th=[ 16], 5.00th=[ 24], 10.00th=[ 31], 20.00th=[ 36], 00:18:56.003 | 30.00th=[ 41], 40.00th=[ 47], 50.00th=[ 50], 60.00th=[ 53], 00:18:56.003 | 70.00th=[ 57], 80.00th=[ 61], 90.00th=[ 66], 95.00th=[ 73], 00:18:56.003 | 99.00th=[ 85], 99.50th=[ 88], 99.90th=[ 95], 99.95th=[ 95], 00:18:56.003 | 99.99th=[ 105] 00:18:56.003 bw ( KiB/s): min= 1120, max= 1992, per=4.28%, avg=1314.40, stdev=231.97, samples=20 00:18:56.003 iops : min= 280, max= 498, avg=328.60, stdev=57.99, samples=20 00:18:56.003 lat (msec) : 20=3.15%, 50=48.70%, 100=48.12%, 250=0.03% 00:18:56.003 cpu : usr=39.86%, sys=1.20%, ctx=1192, majf=0, minf=9 00:18:56.003 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.6%, 16=16.5%, 32=0.0%, >=64=0.0% 00:18:56.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.003 complete : 0=0.0%, 4=87.6%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.003 issued rwts: total=3302,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:18:56.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.003 filename0: (groupid=0, jobs=1): err= 0: pid=82472: Tue Nov 26 20:42:08 2024 00:18:56.003 read: IOPS=322, BW=1288KiB/s (1319kB/s)(12.6MiB/10053msec) 00:18:56.003 slat (usec): min=3, max=8013, avg=14.31, stdev=202.33 00:18:56.003 clat (usec): min=481, max=111938, avg=49531.64, stdev=16294.62 00:18:56.003 lat (usec): min=489, max=111944, avg=49545.95, stdev=16292.57 00:18:56.003 clat percentiles (msec): 00:18:56.003 | 1.00th=[ 4], 5.00th=[ 18], 10.00th=[ 29], 20.00th=[ 37], 00:18:56.003 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 50], 60.00th=[ 56], 00:18:56.003 | 70.00th=[ 59], 80.00th=[ 61], 90.00th=[ 68], 95.00th=[ 73], 00:18:56.003 | 99.00th=[ 85], 99.50th=[ 88], 99.90th=[ 96], 99.95th=[ 109], 00:18:56.003 | 99.99th=[ 112] 00:18:56.003 bw ( KiB/s): min= 1080, max= 2328, per=4.19%, avg=1288.60, stdev=300.60, samples=20 00:18:56.003 iops : min= 270, max= 582, avg=322.15, stdev=75.15, samples=20 00:18:56.003 lat (usec) : 500=0.06% 00:18:56.003 lat (msec) : 4=1.73%, 10=1.67%, 20=1.95%, 50=46.57%, 100=47.96% 00:18:56.003 lat (msec) : 250=0.06% 00:18:56.003 cpu : usr=35.73%, sys=1.11%, ctx=1004, majf=0, minf=9 00:18:56.003 IO depths : 1=0.2%, 2=0.6%, 4=1.7%, 8=80.7%, 16=16.9%, 32=0.0%, >=64=0.0% 00:18:56.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.003 complete : 0=0.0%, 4=88.4%, 8=11.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.003 issued rwts: total=3238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.003 filename0: (groupid=0, jobs=1): err= 0: pid=82473: Tue Nov 26 20:42:08 2024 00:18:56.003 read: IOPS=309, BW=1240KiB/s (1270kB/s)(12.1MiB/10026msec) 00:18:56.003 slat (usec): min=3, max=4018, avg=12.93, stdev=86.44 00:18:56.004 clat (msec): min=15, max=113, avg=51.51, stdev=13.63 00:18:56.004 lat (msec): min=15, max=113, avg=51.52, stdev=13.63 00:18:56.004 clat percentiles (msec): 00:18:56.004 | 1.00th=[ 23], 5.00th=[ 27], 10.00th=[ 34], 20.00th=[ 40], 00:18:56.004 | 30.00th=[ 46], 40.00th=[ 50], 50.00th=[ 53], 60.00th=[ 56], 00:18:56.004 | 70.00th=[ 58], 80.00th=[ 62], 90.00th=[ 68], 95.00th=[ 75], 00:18:56.004 | 99.00th=[ 85], 99.50th=[ 88], 99.90th=[ 102], 99.95th=[ 108], 00:18:56.004 | 99.99th=[ 114] 00:18:56.004 bw ( KiB/s): min= 1016, max= 1648, per=4.03%, avg=1239.20, stdev=130.63, samples=20 00:18:56.004 iops : min= 254, max= 412, avg=309.80, stdev=32.66, samples=20 00:18:56.004 lat (msec) : 20=0.51%, 50=43.05%, 100=56.15%, 250=0.29% 00:18:56.004 cpu : usr=43.64%, sys=1.56%, ctx=1579, majf=0, minf=9 00:18:56.004 IO depths : 1=0.1%, 2=1.1%, 4=4.3%, 8=78.2%, 16=16.3%, 32=0.0%, >=64=0.0% 00:18:56.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.004 complete : 0=0.0%, 4=88.9%, 8=10.2%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.004 issued rwts: total=3108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.004 filename0: (groupid=0, jobs=1): err= 0: pid=82474: Tue Nov 26 20:42:08 2024 00:18:56.004 read: IOPS=326, BW=1308KiB/s (1339kB/s)(12.8MiB/10028msec) 00:18:56.004 slat (usec): min=3, max=8014, avg=23.06, stdev=231.39 00:18:56.004 clat (usec): min=12779, max=95913, avg=48807.45, stdev=13437.38 00:18:56.004 lat (usec): min=12786, max=95923, avg=48830.50, stdev=13437.06 00:18:56.004 clat percentiles (usec): 00:18:56.004 | 1.00th=[22414], 5.00th=[27395], 
10.00th=[32375], 20.00th=[36439], 00:18:56.004 | 30.00th=[39060], 40.00th=[45351], 50.00th=[49021], 60.00th=[52691], 00:18:56.004 | 70.00th=[56361], 80.00th=[60031], 90.00th=[64226], 95.00th=[72877], 00:18:56.004 | 99.00th=[82314], 99.50th=[84411], 99.90th=[95945], 99.95th=[95945], 00:18:56.004 | 99.99th=[95945] 00:18:56.004 bw ( KiB/s): min= 1120, max= 1800, per=4.25%, avg=1306.10, stdev=138.11, samples=20 00:18:56.004 iops : min= 280, max= 450, avg=326.50, stdev=34.54, samples=20 00:18:56.004 lat (msec) : 20=0.43%, 50=51.66%, 100=47.91% 00:18:56.004 cpu : usr=41.81%, sys=1.38%, ctx=1345, majf=0, minf=9 00:18:56.004 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=82.0%, 16=16.0%, 32=0.0%, >=64=0.0% 00:18:56.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.004 complete : 0=0.0%, 4=87.5%, 8=12.2%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.004 issued rwts: total=3279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.004 filename0: (groupid=0, jobs=1): err= 0: pid=82475: Tue Nov 26 20:42:08 2024 00:18:56.004 read: IOPS=318, BW=1273KiB/s (1303kB/s)(12.5MiB/10020msec) 00:18:56.004 slat (usec): min=3, max=9019, avg=19.87, stdev=256.43 00:18:56.004 clat (usec): min=16916, max=94302, avg=50178.47, stdev=13578.35 00:18:56.004 lat (usec): min=16924, max=94311, avg=50198.34, stdev=13583.35 00:18:56.004 clat percentiles (usec): 00:18:56.004 | 1.00th=[21890], 5.00th=[25560], 10.00th=[33817], 20.00th=[36439], 00:18:56.004 | 30.00th=[42730], 40.00th=[47973], 50.00th=[50070], 60.00th=[54789], 00:18:56.004 | 70.00th=[57934], 80.00th=[60031], 90.00th=[67634], 95.00th=[71828], 00:18:56.004 | 99.00th=[83362], 99.50th=[85459], 99.90th=[91751], 99.95th=[93848], 00:18:56.004 | 99.99th=[93848] 00:18:56.004 bw ( KiB/s): min= 1120, max= 1542, per=4.13%, avg=1268.30, stdev=125.87, samples=20 00:18:56.004 iops : min= 280, max= 385, avg=317.05, stdev=31.41, samples=20 00:18:56.004 lat (msec) : 20=0.50%, 50=49.59%, 100=49.91% 00:18:56.004 cpu : usr=35.92%, sys=1.03%, ctx=1075, majf=0, minf=9 00:18:56.004 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=79.7%, 16=16.2%, 32=0.0%, >=64=0.0% 00:18:56.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.004 complete : 0=0.0%, 4=88.4%, 8=10.9%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.004 issued rwts: total=3188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.004 filename0: (groupid=0, jobs=1): err= 0: pid=82476: Tue Nov 26 20:42:08 2024 00:18:56.004 read: IOPS=308, BW=1235KiB/s (1265kB/s)(12.1MiB/10048msec) 00:18:56.004 slat (usec): min=2, max=2613, avg= 9.05, stdev=46.98 00:18:56.004 clat (msec): min=9, max=108, avg=51.75, stdev=14.26 00:18:56.004 lat (msec): min=9, max=108, avg=51.76, stdev=14.27 00:18:56.004 clat percentiles (msec): 00:18:56.004 | 1.00th=[ 13], 5.00th=[ 26], 10.00th=[ 35], 20.00th=[ 40], 00:18:56.004 | 30.00th=[ 48], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 58], 00:18:56.004 | 70.00th=[ 61], 80.00th=[ 61], 90.00th=[ 70], 95.00th=[ 72], 00:18:56.004 | 99.00th=[ 85], 99.50th=[ 88], 99.90th=[ 96], 99.95th=[ 100], 00:18:56.004 | 99.99th=[ 109] 00:18:56.004 bw ( KiB/s): min= 1040, max= 1781, per=4.01%, avg=1233.75, stdev=172.05, samples=20 00:18:56.004 iops : min= 260, max= 445, avg=308.40, stdev=42.97, samples=20 00:18:56.004 lat (msec) : 10=0.52%, 20=2.06%, 50=45.09%, 100=52.30%, 250=0.03% 00:18:56.004 cpu : usr=34.16%, sys=1.25%, ctx=903, majf=0, minf=9 00:18:56.004 IO 
depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=78.4%, 16=16.7%, 32=0.0%, >=64=0.0% 00:18:56.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.004 complete : 0=0.0%, 4=89.0%, 8=10.1%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.004 issued rwts: total=3103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.004 filename0: (groupid=0, jobs=1): err= 0: pid=82477: Tue Nov 26 20:42:08 2024 00:18:56.004 read: IOPS=311, BW=1245KiB/s (1274kB/s)(12.2MiB/10031msec) 00:18:56.004 slat (usec): min=4, max=8040, avg=19.32, stdev=234.85 00:18:56.004 clat (usec): min=17642, max=96083, avg=51320.92, stdev=13608.19 00:18:56.004 lat (usec): min=17647, max=96089, avg=51340.24, stdev=13609.49 00:18:56.004 clat percentiles (usec): 00:18:56.004 | 1.00th=[20579], 5.00th=[25560], 10.00th=[34866], 20.00th=[38011], 00:18:56.004 | 30.00th=[47973], 40.00th=[47973], 50.00th=[51119], 60.00th=[55837], 00:18:56.004 | 70.00th=[59507], 80.00th=[60031], 90.00th=[68682], 95.00th=[72877], 00:18:56.004 | 99.00th=[84411], 99.50th=[88605], 99.90th=[95945], 99.95th=[95945], 00:18:56.004 | 99.99th=[95945] 00:18:56.004 bw ( KiB/s): min= 1096, max= 1520, per=4.04%, avg=1242.15, stdev=114.48, samples=20 00:18:56.004 iops : min= 274, max= 380, avg=310.50, stdev=28.53, samples=20 00:18:56.004 lat (msec) : 20=0.90%, 50=46.07%, 100=53.03% 00:18:56.004 cpu : usr=36.81%, sys=1.06%, ctx=1018, majf=0, minf=9 00:18:56.004 IO depths : 1=0.1%, 2=1.0%, 4=3.7%, 8=79.0%, 16=16.3%, 32=0.0%, >=64=0.0% 00:18:56.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.004 complete : 0=0.0%, 4=88.7%, 8=10.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.004 issued rwts: total=3121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.004 filename1: (groupid=0, jobs=1): err= 0: pid=82478: Tue Nov 26 20:42:08 2024 00:18:56.004 read: IOPS=318, BW=1273KiB/s (1304kB/s)(12.5MiB/10034msec) 00:18:56.004 slat (usec): min=3, max=4019, avg=15.12, stdev=137.93 00:18:56.004 clat (msec): min=12, max=108, avg=50.14, stdev=13.25 00:18:56.004 lat (msec): min=12, max=108, avg=50.15, stdev=13.24 00:18:56.004 clat percentiles (msec): 00:18:56.004 | 1.00th=[ 22], 5.00th=[ 31], 10.00th=[ 34], 20.00th=[ 38], 00:18:56.004 | 30.00th=[ 44], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 55], 00:18:56.004 | 70.00th=[ 58], 80.00th=[ 61], 90.00th=[ 65], 95.00th=[ 73], 00:18:56.004 | 99.00th=[ 84], 99.50th=[ 87], 99.90th=[ 96], 99.95th=[ 99], 00:18:56.004 | 99.99th=[ 109] 00:18:56.004 bw ( KiB/s): min= 1072, max= 1552, per=4.14%, avg=1271.35, stdev=126.61, samples=20 00:18:56.004 iops : min= 268, max= 388, avg=317.80, stdev=31.57, samples=20 00:18:56.004 lat (msec) : 20=0.06%, 50=49.69%, 100=50.22%, 250=0.03% 00:18:56.004 cpu : usr=40.04%, sys=1.23%, ctx=1137, majf=0, minf=9 00:18:56.004 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 8=79.5%, 16=16.2%, 32=0.0%, >=64=0.0% 00:18:56.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.004 complete : 0=0.0%, 4=88.4%, 8=10.9%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.004 issued rwts: total=3194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.004 filename1: (groupid=0, jobs=1): err= 0: pid=82479: Tue Nov 26 20:42:08 2024 00:18:56.004 read: IOPS=316, BW=1267KiB/s (1297kB/s)(12.4MiB/10008msec) 00:18:56.004 slat (usec): min=3, max=8018, avg=18.54, stdev=246.28 00:18:56.004 clat 
(msec): min=12, max=110, avg=50.39, stdev=13.28 00:18:56.004 lat (msec): min=12, max=110, avg=50.40, stdev=13.27 00:18:56.004 clat percentiles (msec): 00:18:56.004 | 1.00th=[ 25], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 36], 00:18:56.004 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 50], 60.00th=[ 52], 00:18:56.004 | 70.00th=[ 59], 80.00th=[ 61], 90.00th=[ 64], 95.00th=[ 72], 00:18:56.004 | 99.00th=[ 85], 99.50th=[ 85], 99.90th=[ 96], 99.95th=[ 97], 00:18:56.004 | 99.99th=[ 111] 00:18:56.004 bw ( KiB/s): min= 1088, max= 1552, per=4.11%, avg=1264.00, stdev=107.63, samples=19 00:18:56.004 iops : min= 272, max= 388, avg=316.00, stdev=26.91, samples=19 00:18:56.004 lat (msec) : 20=0.25%, 50=54.98%, 100=44.73%, 250=0.03% 00:18:56.004 cpu : usr=31.84%, sys=1.08%, ctx=878, majf=0, minf=9 00:18:56.004 IO depths : 1=0.1%, 2=0.8%, 4=3.1%, 8=80.0%, 16=16.1%, 32=0.0%, >=64=0.0% 00:18:56.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.004 complete : 0=0.0%, 4=88.2%, 8=11.1%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.004 issued rwts: total=3170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.004 filename1: (groupid=0, jobs=1): err= 0: pid=82480: Tue Nov 26 20:42:08 2024 00:18:56.004 read: IOPS=317, BW=1268KiB/s (1299kB/s)(12.4MiB/10049msec) 00:18:56.004 slat (usec): min=3, max=3868, avg=12.13, stdev=85.05 00:18:56.004 clat (msec): min=12, max=106, avg=50.38, stdev=14.09 00:18:56.004 lat (msec): min=12, max=106, avg=50.40, stdev=14.09 00:18:56.004 clat percentiles (msec): 00:18:56.004 | 1.00th=[ 16], 5.00th=[ 24], 10.00th=[ 33], 20.00th=[ 39], 00:18:56.004 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 55], 00:18:56.004 | 70.00th=[ 58], 80.00th=[ 61], 90.00th=[ 67], 95.00th=[ 75], 00:18:56.004 | 99.00th=[ 85], 99.50th=[ 88], 99.90th=[ 95], 99.95th=[ 106], 00:18:56.004 | 99.99th=[ 107] 00:18:56.004 bw ( KiB/s): min= 1088, max= 1744, per=4.12%, avg=1267.60, stdev=162.33, samples=20 00:18:56.004 iops : min= 272, max= 436, avg=316.85, stdev=40.51, samples=20 00:18:56.004 lat (msec) : 20=2.07%, 50=45.70%, 100=52.17%, 250=0.06% 00:18:56.004 cpu : usr=39.38%, sys=1.15%, ctx=1518, majf=0, minf=9 00:18:56.004 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=80.9%, 16=16.8%, 32=0.0%, >=64=0.0% 00:18:56.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.004 complete : 0=0.0%, 4=88.3%, 8=11.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.004 issued rwts: total=3186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.004 filename1: (groupid=0, jobs=1): err= 0: pid=82481: Tue Nov 26 20:42:08 2024 00:18:56.004 read: IOPS=320, BW=1283KiB/s (1314kB/s)(12.6MiB/10061msec) 00:18:56.004 slat (usec): min=4, max=4014, avg= 9.17, stdev=70.65 00:18:56.004 clat (usec): min=1165, max=108008, avg=49735.23, stdev=16824.35 00:18:56.004 lat (usec): min=1173, max=108014, avg=49744.40, stdev=16822.49 00:18:56.004 clat percentiles (msec): 00:18:56.004 | 1.00th=[ 3], 5.00th=[ 16], 10.00th=[ 26], 20.00th=[ 37], 00:18:56.004 | 30.00th=[ 48], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 57], 00:18:56.004 | 70.00th=[ 61], 80.00th=[ 61], 90.00th=[ 70], 95.00th=[ 72], 00:18:56.004 | 99.00th=[ 85], 99.50th=[ 85], 99.90th=[ 96], 99.95th=[ 105], 00:18:56.004 | 99.99th=[ 109] 00:18:56.004 bw ( KiB/s): min= 1008, max= 2576, per=4.18%, avg=1284.20, stdev=336.76, samples=20 00:18:56.004 iops : min= 252, max= 644, avg=321.05, stdev=84.19, samples=20 
00:18:56.004 lat (msec) : 2=0.74%, 4=2.23%, 10=0.99%, 20=2.45%, 50=43.60% 00:18:56.004 lat (msec) : 100=49.92%, 250=0.06% 00:18:56.004 cpu : usr=34.28%, sys=1.11%, ctx=900, majf=0, minf=0 00:18:56.004 IO depths : 1=0.2%, 2=0.9%, 4=3.3%, 8=78.9%, 16=16.7%, 32=0.0%, >=64=0.0% 00:18:56.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.004 complete : 0=0.0%, 4=88.9%, 8=10.4%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.004 issued rwts: total=3227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.004 filename1: (groupid=0, jobs=1): err= 0: pid=82482: Tue Nov 26 20:42:08 2024 00:18:56.004 read: IOPS=328, BW=1313KiB/s (1345kB/s)(12.8MiB/10006msec) 00:18:56.004 slat (usec): min=3, max=8023, avg=23.10, stdev=282.34 00:18:56.004 clat (usec): min=5100, max=94994, avg=48642.44, stdev=13667.10 00:18:56.004 lat (usec): min=5110, max=95002, avg=48665.54, stdev=13665.29 00:18:56.004 clat percentiles (usec): 00:18:56.004 | 1.00th=[21103], 5.00th=[25035], 10.00th=[32375], 20.00th=[35914], 00:18:56.004 | 30.00th=[39584], 40.00th=[45876], 50.00th=[47973], 60.00th=[52691], 00:18:56.004 | 70.00th=[56886], 80.00th=[60031], 90.00th=[63701], 95.00th=[71828], 00:18:56.004 | 99.00th=[83362], 99.50th=[84411], 99.90th=[90702], 99.95th=[94897], 00:18:56.004 | 99.99th=[94897] 00:18:56.004 bw ( KiB/s): min= 1168, max= 1936, per=4.26%, avg=1308.21, stdev=172.66, samples=19 00:18:56.004 iops : min= 292, max= 484, avg=327.05, stdev=43.17, samples=19 00:18:56.004 lat (msec) : 10=0.18%, 20=0.58%, 50=53.06%, 100=46.18% 00:18:56.004 cpu : usr=35.30%, sys=1.04%, ctx=1137, majf=0, minf=9 00:18:56.004 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.6%, 16=16.3%, 32=0.0%, >=64=0.0% 00:18:56.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.004 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.004 issued rwts: total=3285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.004 filename1: (groupid=0, jobs=1): err= 0: pid=82483: Tue Nov 26 20:42:08 2024 00:18:56.004 read: IOPS=330, BW=1322KiB/s (1354kB/s)(12.9MiB/10025msec) 00:18:56.004 slat (usec): min=2, max=4021, avg=15.70, stdev=144.42 00:18:56.004 clat (msec): min=14, max=107, avg=48.33, stdev=14.22 00:18:56.004 lat (msec): min=14, max=107, avg=48.34, stdev=14.22 00:18:56.004 clat percentiles (msec): 00:18:56.004 | 1.00th=[ 16], 5.00th=[ 24], 10.00th=[ 31], 20.00th=[ 36], 00:18:56.004 | 30.00th=[ 41], 40.00th=[ 46], 50.00th=[ 49], 60.00th=[ 54], 00:18:56.004 | 70.00th=[ 56], 80.00th=[ 60], 90.00th=[ 66], 95.00th=[ 72], 00:18:56.004 | 99.00th=[ 84], 99.50th=[ 86], 99.90th=[ 90], 99.95th=[ 90], 00:18:56.004 | 99.99th=[ 108] 00:18:56.005 bw ( KiB/s): min= 1136, max= 2072, per=4.29%, avg=1318.80, stdev=212.92, samples=20 00:18:56.005 iops : min= 284, max= 518, avg=329.70, stdev=53.23, samples=20 00:18:56.005 lat (msec) : 20=2.57%, 50=49.83%, 100=47.57%, 250=0.03% 00:18:56.005 cpu : usr=44.35%, sys=1.54%, ctx=1339, majf=0, minf=9 00:18:56.005 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.0%, 16=16.4%, 32=0.0%, >=64=0.0% 00:18:56.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.005 complete : 0=0.0%, 4=87.4%, 8=12.5%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.005 issued rwts: total=3313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.005 filename1: 
(groupid=0, jobs=1): err= 0: pid=82484: Tue Nov 26 20:42:08 2024 00:18:56.005 read: IOPS=324, BW=1298KiB/s (1329kB/s)(12.7MiB/10051msec) 00:18:56.005 slat (usec): min=3, max=8018, avg=17.95, stdev=201.69 00:18:56.005 clat (msec): min=2, max=108, avg=49.12, stdev=16.13 00:18:56.005 lat (msec): min=2, max=108, avg=49.14, stdev=16.13 00:18:56.005 clat percentiles (msec): 00:18:56.005 | 1.00th=[ 4], 5.00th=[ 17], 10.00th=[ 30], 20.00th=[ 38], 00:18:56.005 | 30.00th=[ 42], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 55], 00:18:56.005 | 70.00th=[ 57], 80.00th=[ 62], 90.00th=[ 68], 95.00th=[ 73], 00:18:56.005 | 99.00th=[ 84], 99.50th=[ 86], 99.90th=[ 96], 99.95th=[ 96], 00:18:56.005 | 99.99th=[ 109] 00:18:56.005 bw ( KiB/s): min= 1040, max= 2520, per=4.23%, avg=1300.00, stdev=312.61, samples=20 00:18:56.005 iops : min= 260, max= 630, avg=325.00, stdev=78.15, samples=20 00:18:56.005 lat (msec) : 4=1.59%, 10=1.72%, 20=2.05%, 50=40.90%, 100=53.71% 00:18:56.005 lat (msec) : 250=0.03% 00:18:56.005 cpu : usr=43.67%, sys=1.39%, ctx=1338, majf=0, minf=9 00:18:56.005 IO depths : 1=0.1%, 2=1.4%, 4=5.7%, 8=76.9%, 16=15.9%, 32=0.0%, >=64=0.0% 00:18:56.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.005 complete : 0=0.0%, 4=89.2%, 8=9.6%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.005 issued rwts: total=3262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.005 filename1: (groupid=0, jobs=1): err= 0: pid=82485: Tue Nov 26 20:42:08 2024 00:18:56.005 read: IOPS=329, BW=1320KiB/s (1351kB/s)(13.0MiB/10051msec) 00:18:56.005 slat (usec): min=2, max=9015, avg=15.27, stdev=181.27 00:18:56.005 clat (usec): min=11996, max=94440, avg=48381.12, stdev=14183.08 00:18:56.005 lat (usec): min=12004, max=94445, avg=48396.38, stdev=14182.08 00:18:56.005 clat percentiles (usec): 00:18:56.005 | 1.00th=[17433], 5.00th=[23987], 10.00th=[31851], 20.00th=[35914], 00:18:56.005 | 30.00th=[40109], 40.00th=[45876], 50.00th=[48497], 60.00th=[52167], 00:18:56.005 | 70.00th=[55837], 80.00th=[59507], 90.00th=[64226], 95.00th=[72877], 00:18:56.005 | 99.00th=[84411], 99.50th=[86508], 99.90th=[93848], 99.95th=[94897], 00:18:56.005 | 99.99th=[94897] 00:18:56.005 bw ( KiB/s): min= 1144, max= 1852, per=4.29%, avg=1319.25, stdev=194.98, samples=20 00:18:56.005 iops : min= 286, max= 463, avg=329.80, stdev=48.74, samples=20 00:18:56.005 lat (msec) : 20=2.32%, 50=50.30%, 100=47.38% 00:18:56.005 cpu : usr=42.83%, sys=1.39%, ctx=1338, majf=0, minf=9 00:18:56.005 IO depths : 1=0.1%, 2=0.2%, 4=1.1%, 8=82.3%, 16=16.4%, 32=0.0%, >=64=0.0% 00:18:56.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.005 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.005 issued rwts: total=3316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.005 filename2: (groupid=0, jobs=1): err= 0: pid=82486: Tue Nov 26 20:42:08 2024 00:18:56.005 read: IOPS=329, BW=1319KiB/s (1351kB/s)(12.9MiB/10001msec) 00:18:56.005 slat (usec): min=3, max=4016, avg=11.54, stdev=70.10 00:18:56.005 clat (usec): min=739, max=95972, avg=48471.74, stdev=15304.95 00:18:56.005 lat (usec): min=744, max=95978, avg=48483.28, stdev=15304.43 00:18:56.005 clat percentiles (usec): 00:18:56.005 | 1.00th=[ 1319], 5.00th=[23987], 10.00th=[31851], 20.00th=[35914], 00:18:56.005 | 30.00th=[39584], 40.00th=[47449], 50.00th=[47973], 60.00th=[52167], 00:18:56.005 | 70.00th=[58459], 
80.00th=[60031], 90.00th=[64226], 95.00th=[71828], 00:18:56.005 | 99.00th=[84411], 99.50th=[87557], 99.90th=[94897], 99.95th=[95945], 00:18:56.005 | 99.99th=[95945] 00:18:56.005 bw ( KiB/s): min= 1112, max= 1904, per=4.17%, avg=1281.26, stdev=177.57, samples=19 00:18:56.005 iops : min= 278, max= 476, avg=320.32, stdev=44.39, samples=19 00:18:56.005 lat (usec) : 750=0.06%, 1000=0.33% 00:18:56.005 lat (msec) : 2=0.97%, 4=0.85%, 10=0.21%, 20=0.45%, 50=52.40% 00:18:56.005 lat (msec) : 100=44.72% 00:18:56.005 cpu : usr=32.01%, sys=0.93%, ctx=881, majf=0, minf=9 00:18:56.005 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.8%, 16=16.3%, 32=0.0%, >=64=0.0% 00:18:56.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.005 complete : 0=0.0%, 4=87.8%, 8=11.8%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.005 issued rwts: total=3298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.005 filename2: (groupid=0, jobs=1): err= 0: pid=82487: Tue Nov 26 20:42:08 2024 00:18:56.005 read: IOPS=330, BW=1324KiB/s (1356kB/s)(12.9MiB/10005msec) 00:18:56.005 slat (usec): min=3, max=8019, avg=22.25, stdev=244.37 00:18:56.005 clat (usec): min=5237, max=90311, avg=48253.35, stdev=13600.43 00:18:56.005 lat (usec): min=5243, max=90321, avg=48275.60, stdev=13597.12 00:18:56.005 clat percentiles (usec): 00:18:56.005 | 1.00th=[17433], 5.00th=[26870], 10.00th=[32113], 20.00th=[35914], 00:18:56.005 | 30.00th=[39584], 40.00th=[44827], 50.00th=[47973], 60.00th=[52167], 00:18:56.005 | 70.00th=[55837], 80.00th=[60031], 90.00th=[63701], 95.00th=[71828], 00:18:56.005 | 99.00th=[82314], 99.50th=[84411], 99.90th=[86508], 99.95th=[90702], 00:18:56.005 | 99.99th=[90702] 00:18:56.005 bw ( KiB/s): min= 1176, max= 1992, per=4.29%, avg=1317.58, stdev=181.28, samples=19 00:18:56.005 iops : min= 294, max= 498, avg=329.37, stdev=45.33, samples=19 00:18:56.005 lat (msec) : 10=0.18%, 20=1.00%, 50=53.25%, 100=45.58% 00:18:56.005 cpu : usr=39.44%, sys=1.22%, ctx=1104, majf=0, minf=9 00:18:56.005 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.8%, 16=16.1%, 32=0.0%, >=64=0.0% 00:18:56.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.005 complete : 0=0.0%, 4=87.3%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.005 issued rwts: total=3311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.005 filename2: (groupid=0, jobs=1): err= 0: pid=82488: Tue Nov 26 20:42:08 2024 00:18:56.005 read: IOPS=315, BW=1261KiB/s (1292kB/s)(12.4MiB/10037msec) 00:18:56.005 slat (usec): min=3, max=8029, avg=21.34, stdev=264.68 00:18:56.005 clat (usec): min=13594, max=96166, avg=50615.93, stdev=13724.57 00:18:56.005 lat (usec): min=13602, max=96174, avg=50637.27, stdev=13729.59 00:18:56.005 clat percentiles (usec): 00:18:56.005 | 1.00th=[18220], 5.00th=[26346], 10.00th=[32900], 20.00th=[38011], 00:18:56.005 | 30.00th=[45876], 40.00th=[47973], 50.00th=[50594], 60.00th=[54789], 00:18:56.005 | 70.00th=[57934], 80.00th=[60031], 90.00th=[67634], 95.00th=[72877], 00:18:56.005 | 99.00th=[83362], 99.50th=[84411], 99.90th=[89654], 99.95th=[89654], 00:18:56.005 | 99.99th=[95945] 00:18:56.005 bw ( KiB/s): min= 1144, max= 1664, per=4.10%, avg=1259.60, stdev=129.43, samples=20 00:18:56.005 iops : min= 286, max= 416, avg=314.90, stdev=32.36, samples=20 00:18:56.005 lat (msec) : 20=1.90%, 50=47.05%, 100=51.06% 00:18:56.005 cpu : usr=37.36%, sys=1.04%, ctx=1191, majf=0, minf=9 00:18:56.005 IO 
depths : 1=0.1%, 2=1.2%, 4=4.8%, 8=78.0%, 16=15.9%, 32=0.0%, >=64=0.0% 00:18:56.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.005 complete : 0=0.0%, 4=88.8%, 8=10.1%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.005 issued rwts: total=3165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.005 filename2: (groupid=0, jobs=1): err= 0: pid=82489: Tue Nov 26 20:42:08 2024 00:18:56.005 read: IOPS=322, BW=1289KiB/s (1320kB/s)(12.6MiB/10049msec) 00:18:56.005 slat (usec): min=3, max=10736, avg=21.90, stdev=310.58 00:18:56.005 clat (msec): min=7, max=110, avg=49.54, stdev=14.88 00:18:56.005 lat (msec): min=7, max=110, avg=49.56, stdev=14.89 00:18:56.005 clat percentiles (msec): 00:18:56.005 | 1.00th=[ 9], 5.00th=[ 23], 10.00th=[ 32], 20.00th=[ 37], 00:18:56.005 | 30.00th=[ 43], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 55], 00:18:56.005 | 70.00th=[ 58], 80.00th=[ 61], 90.00th=[ 67], 95.00th=[ 74], 00:18:56.005 | 99.00th=[ 85], 99.50th=[ 86], 99.90th=[ 95], 99.95th=[ 96], 00:18:56.005 | 99.99th=[ 111] 00:18:56.005 bw ( KiB/s): min= 1096, max= 1765, per=4.19%, avg=1287.70, stdev=185.29, samples=20 00:18:56.005 iops : min= 274, max= 441, avg=321.90, stdev=46.29, samples=20 00:18:56.005 lat (msec) : 10=1.42%, 20=2.25%, 50=44.26%, 100=52.04%, 250=0.03% 00:18:56.005 cpu : usr=40.14%, sys=1.41%, ctx=1325, majf=0, minf=9 00:18:56.005 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=80.9%, 16=16.6%, 32=0.0%, >=64=0.0% 00:18:56.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.005 complete : 0=0.0%, 4=88.2%, 8=11.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.005 issued rwts: total=3238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.005 filename2: (groupid=0, jobs=1): err= 0: pid=82490: Tue Nov 26 20:42:08 2024 00:18:56.005 read: IOPS=302, BW=1208KiB/s (1237kB/s)(11.8MiB/10016msec) 00:18:56.005 slat (usec): min=3, max=8016, avg=15.01, stdev=205.91 00:18:56.005 clat (msec): min=16, max=119, avg=52.86, stdev=14.44 00:18:56.005 lat (msec): min=16, max=119, avg=52.87, stdev=14.44 00:18:56.005 clat percentiles (msec): 00:18:56.005 | 1.00th=[ 24], 5.00th=[ 28], 10.00th=[ 36], 20.00th=[ 40], 00:18:56.005 | 30.00th=[ 48], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 59], 00:18:56.005 | 70.00th=[ 61], 80.00th=[ 61], 90.00th=[ 71], 95.00th=[ 74], 00:18:56.005 | 99.00th=[ 96], 99.50th=[ 109], 99.90th=[ 121], 99.95th=[ 121], 00:18:56.005 | 99.99th=[ 121] 00:18:56.005 bw ( KiB/s): min= 896, max= 1648, per=3.92%, avg=1204.00, stdev=140.22, samples=20 00:18:56.005 iops : min= 224, max= 412, avg=301.00, stdev=35.05, samples=20 00:18:56.005 lat (msec) : 20=0.23%, 50=46.89%, 100=52.28%, 250=0.59% 00:18:56.005 cpu : usr=31.68%, sys=1.08%, ctx=835, majf=0, minf=9 00:18:56.005 IO depths : 1=0.1%, 2=1.2%, 4=5.0%, 8=77.6%, 16=16.3%, 32=0.0%, >=64=0.0% 00:18:56.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.005 complete : 0=0.0%, 4=89.1%, 8=9.8%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.005 issued rwts: total=3026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.005 filename2: (groupid=0, jobs=1): err= 0: pid=82491: Tue Nov 26 20:42:08 2024 00:18:56.005 read: IOPS=333, BW=1335KiB/s (1367kB/s)(13.0MiB/10004msec) 00:18:56.005 slat (usec): min=3, max=7407, avg=15.22, stdev=145.77 00:18:56.005 clat (usec): min=1576, max=93616, 
avg=47888.62, stdev=13861.60 00:18:56.005 lat (usec): min=1583, max=93626, avg=47903.84, stdev=13860.56 00:18:56.005 clat percentiles (usec): 00:18:56.005 | 1.00th=[22676], 5.00th=[25035], 10.00th=[31851], 20.00th=[35914], 00:18:56.005 | 30.00th=[39060], 40.00th=[44827], 50.00th=[47973], 60.00th=[51643], 00:18:56.005 | 70.00th=[55837], 80.00th=[59507], 90.00th=[64226], 95.00th=[70779], 00:18:56.005 | 99.00th=[83362], 99.50th=[84411], 99.90th=[90702], 99.95th=[93848], 00:18:56.005 | 99.99th=[93848] 00:18:56.005 bw ( KiB/s): min= 1142, max= 1944, per=4.29%, avg=1319.89, stdev=170.16, samples=19 00:18:56.005 iops : min= 285, max= 486, avg=329.95, stdev=42.57, samples=19 00:18:56.005 lat (msec) : 2=0.09%, 4=0.48%, 10=0.21%, 20=0.18%, 50=54.91% 00:18:56.005 lat (msec) : 100=44.13% 00:18:56.005 cpu : usr=42.65%, sys=1.35%, ctx=1211, majf=0, minf=9 00:18:56.005 IO depths : 1=0.1%, 2=0.1%, 4=0.6%, 8=83.3%, 16=16.0%, 32=0.0%, >=64=0.0% 00:18:56.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.005 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.005 issued rwts: total=3338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.005 filename2: (groupid=0, jobs=1): err= 0: pid=82492: Tue Nov 26 20:42:08 2024 00:18:56.005 read: IOPS=327, BW=1312KiB/s (1343kB/s)(12.8MiB/10019msec) 00:18:56.005 slat (usec): min=3, max=9039, avg=25.73, stdev=337.01 00:18:56.005 clat (usec): min=13678, max=94912, avg=48626.00, stdev=13513.55 00:18:56.005 lat (usec): min=13685, max=94918, avg=48651.74, stdev=13510.78 00:18:56.005 clat percentiles (usec): 00:18:56.005 | 1.00th=[18220], 5.00th=[26870], 10.00th=[33817], 20.00th=[35914], 00:18:56.005 | 30.00th=[38011], 40.00th=[46924], 50.00th=[47973], 60.00th=[51119], 00:18:56.005 | 70.00th=[57934], 80.00th=[60031], 90.00th=[62653], 95.00th=[71828], 00:18:56.005 | 99.00th=[83362], 99.50th=[84411], 99.90th=[93848], 99.95th=[94897], 00:18:56.005 | 99.99th=[94897] 00:18:56.005 bw ( KiB/s): min= 1152, max= 1972, per=4.26%, avg=1309.50, stdev=173.93, samples=20 00:18:56.005 iops : min= 288, max= 493, avg=327.35, stdev=43.48, samples=20 00:18:56.005 lat (msec) : 20=1.10%, 50=54.76%, 100=44.14% 00:18:56.005 cpu : usr=32.10%, sys=0.94%, ctx=1134, majf=0, minf=9 00:18:56.005 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.8%, 16=16.3%, 32=0.0%, >=64=0.0% 00:18:56.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.005 complete : 0=0.0%, 4=87.4%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.005 issued rwts: total=3285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.005 filename2: (groupid=0, jobs=1): err= 0: pid=82493: Tue Nov 26 20:42:08 2024 00:18:56.005 read: IOPS=313, BW=1253KiB/s (1283kB/s)(12.3MiB/10013msec) 00:18:56.005 slat (usec): min=3, max=12028, avg=31.52, stdev=462.00 00:18:56.005 clat (msec): min=13, max=100, avg=50.94, stdev=14.25 00:18:56.005 lat (msec): min=13, max=100, avg=50.97, stdev=14.26 00:18:56.005 clat percentiles (msec): 00:18:56.005 | 1.00th=[ 23], 5.00th=[ 26], 10.00th=[ 34], 20.00th=[ 36], 00:18:56.005 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 50], 60.00th=[ 56], 00:18:56.005 | 70.00th=[ 61], 80.00th=[ 62], 90.00th=[ 70], 95.00th=[ 74], 00:18:56.005 | 99.00th=[ 85], 99.50th=[ 87], 99.90th=[ 96], 99.95th=[ 96], 00:18:56.005 | 99.99th=[ 102] 00:18:56.005 bw ( KiB/s): min= 960, max= 1648, per=4.06%, avg=1248.40, stdev=135.33, 
samples=20 00:18:56.005 iops : min= 240, max= 412, avg=312.10, stdev=33.83, samples=20 00:18:56.006 lat (msec) : 20=0.22%, 50=50.05%, 100=49.70%, 250=0.03% 00:18:56.006 cpu : usr=31.93%, sys=0.98%, ctx=884, majf=0, minf=9 00:18:56.006 IO depths : 1=0.1%, 2=1.1%, 4=4.7%, 8=78.4%, 16=15.7%, 32=0.0%, >=64=0.0% 00:18:56.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.006 complete : 0=0.0%, 4=88.6%, 8=10.4%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.006 issued rwts: total=3137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.006 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.006 00:18:56.006 Run status group 0 (all jobs): 00:18:56.006 READ: bw=30.0MiB/s (31.5MB/s), 1208KiB/s-1335KiB/s (1237kB/s-1367kB/s), io=302MiB (317MB), run=10001-10061msec 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:18:56.006 20:42:08 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.006 bdev_null0 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.006 20:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:18:56.006 [2024-11-26 20:42:09.005679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.006 bdev_null1 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:56.006 20:42:09 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:56.006 { 00:18:56.006 "params": { 00:18:56.006 "name": "Nvme$subsystem", 00:18:56.006 "trtype": "$TEST_TRANSPORT", 00:18:56.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:56.006 "adrfam": "ipv4", 00:18:56.006 "trsvcid": "$NVMF_PORT", 00:18:56.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:56.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:56.006 "hdgst": ${hdgst:-false}, 00:18:56.006 "ddgst": ${ddgst:-false} 00:18:56.006 }, 00:18:56.006 "method": "bdev_nvme_attach_controller" 00:18:56.006 } 00:18:56.006 EOF 00:18:56.006 )") 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:18:56.006 20:42:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:56.008 20:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:18:56.008 20:42:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:56.008 { 00:18:56.008 "params": { 00:18:56.008 "name": "Nvme$subsystem", 00:18:56.008 "trtype": "$TEST_TRANSPORT", 00:18:56.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:56.008 "adrfam": "ipv4", 00:18:56.008 "trsvcid": "$NVMF_PORT", 00:18:56.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:56.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:56.008 "hdgst": ${hdgst:-false}, 00:18:56.008 "ddgst": ${ddgst:-false} 00:18:56.008 }, 00:18:56.008 "method": "bdev_nvme_attach_controller" 00:18:56.008 } 00:18:56.008 EOF 00:18:56.008 )") 00:18:56.008 20:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:18:56.008 20:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:18:56.008 20:42:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:18:56.008 20:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:18:56.008 20:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:18:56.008 20:42:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
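Note: the cat/EOF/jq sequence traced above is gen_nvmf_target_json assembling one bdev_nvme_attach_controller fragment per subsystem and then validating the joined result. Condensed into a standalone sketch below; the outer "subsystems"/"bdev"/"config" wrapper is assumed from the standard SPDK JSON-config layout and is not fully visible in this excerpt, and any extra entries the real helper emits are omitted.
config=()
for subsystem in 0 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join the per-controller fragments with commas and let jq validate/pretty-print the result.
jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [
  $(IFS=,; printf '%s\n' "${config[*]}")
] } ] }
JSON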
00:18:56.008 20:42:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:18:56.008 20:42:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:56.008 "params": { 00:18:56.008 "name": "Nvme0", 00:18:56.008 "trtype": "tcp", 00:18:56.008 "traddr": "10.0.0.3", 00:18:56.008 "adrfam": "ipv4", 00:18:56.008 "trsvcid": "4420", 00:18:56.008 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:56.008 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:56.008 "hdgst": false, 00:18:56.008 "ddgst": false 00:18:56.008 }, 00:18:56.008 "method": "bdev_nvme_attach_controller" 00:18:56.008 },{ 00:18:56.008 "params": { 00:18:56.008 "name": "Nvme1", 00:18:56.008 "trtype": "tcp", 00:18:56.008 "traddr": "10.0.0.3", 00:18:56.008 "adrfam": "ipv4", 00:18:56.008 "trsvcid": "4420", 00:18:56.008 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:56.008 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:56.008 "hdgst": false, 00:18:56.008 "ddgst": false 00:18:56.008 }, 00:18:56.008 "method": "bdev_nvme_attach_controller" 00:18:56.008 }' 00:18:56.008 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:56.008 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:56.008 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:56.008 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:56.008 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:56.008 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:56.008 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:56.008 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:56.008 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:56.008 20:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:56.008 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:18:56.008 ... 00:18:56.008 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:18:56.008 ... 
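Note: the LD_PRELOAD'ed fio run traced above can be reproduced outside the harness. A rough standalone equivalent follows, assuming the JSON printed above was saved to /tmp/nvme.json and that the attached controllers expose bdevs named Nvme0n1/Nvme1n1; the bdev names, temp-file paths and the thread/time_based settings are assumptions, not values copied from dif.sh.
# Workload shape mirrors dif.sh@115: bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5.
cat > /tmp/dif_rand_params.fio <<'EOF'
[global]
# fio external engine from SPDK's bdev fio plugin; config replaces the /dev/fd/62 descriptor used above
ioengine=spdk_bdev
spdk_json_conf=/tmp/nvme.json
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
# two job sections x numjobs=2 gives the 4 threads started below
numjobs=2
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio /tmp/dif_rand_params.fio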
00:18:56.008 fio-3.35 00:18:56.008 Starting 4 threads 00:19:01.264 00:19:01.264 filename0: (groupid=0, jobs=1): err= 0: pid=82640: Tue Nov 26 20:42:14 2024 00:19:01.264 read: IOPS=2974, BW=23.2MiB/s (24.4MB/s)(116MiB/5002msec) 00:19:01.264 slat (nsec): min=5437, max=37049, avg=8827.97, stdev=5343.66 00:19:01.264 clat (usec): min=686, max=5675, avg=2666.21, stdev=768.17 00:19:01.264 lat (usec): min=692, max=5681, avg=2675.04, stdev=768.57 00:19:01.264 clat percentiles (usec): 00:19:01.264 | 1.00th=[ 1139], 5.00th=[ 1532], 10.00th=[ 1582], 20.00th=[ 1827], 00:19:01.264 | 30.00th=[ 2024], 40.00th=[ 2507], 50.00th=[ 2933], 60.00th=[ 3195], 00:19:01.264 | 70.00th=[ 3294], 80.00th=[ 3359], 90.00th=[ 3458], 95.00th=[ 3589], 00:19:01.264 | 99.00th=[ 3884], 99.50th=[ 4015], 99.90th=[ 4490], 99.95th=[ 4883], 00:19:01.264 | 99.99th=[ 5145] 00:19:01.264 bw ( KiB/s): min=19584, max=26240, per=26.56%, avg=23838.22, stdev=2136.53, samples=9 00:19:01.264 iops : min= 2448, max= 3280, avg=2979.78, stdev=267.07, samples=9 00:19:01.264 lat (usec) : 750=0.03%, 1000=0.52% 00:19:01.264 lat (msec) : 2=29.04%, 4=69.87%, 10=0.53% 00:19:01.264 cpu : usr=94.56%, sys=4.88%, ctx=8, majf=0, minf=0 00:19:01.264 IO depths : 1=0.1%, 2=5.1%, 4=61.3%, 8=33.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.264 complete : 0=0.0%, 4=98.1%, 8=1.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.264 issued rwts: total=14878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.264 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:01.264 filename0: (groupid=0, jobs=1): err= 0: pid=82641: Tue Nov 26 20:42:14 2024 00:19:01.264 read: IOPS=2780, BW=21.7MiB/s (22.8MB/s)(109MiB/5002msec) 00:19:01.264 slat (nsec): min=3063, max=38224, avg=9386.31, stdev=5653.58 00:19:01.264 clat (usec): min=556, max=5331, avg=2848.95, stdev=769.02 00:19:01.264 lat (usec): min=562, max=5337, avg=2858.34, stdev=769.10 00:19:01.265 clat percentiles (usec): 00:19:01.265 | 1.00th=[ 1221], 5.00th=[ 1565], 10.00th=[ 1614], 20.00th=[ 1909], 00:19:01.265 | 30.00th=[ 2343], 40.00th=[ 2966], 50.00th=[ 3228], 60.00th=[ 3294], 00:19:01.265 | 70.00th=[ 3359], 80.00th=[ 3392], 90.00th=[ 3556], 95.00th=[ 3720], 00:19:01.265 | 99.00th=[ 4490], 99.50th=[ 4555], 99.90th=[ 4752], 99.95th=[ 5080], 00:19:01.265 | 99.99th=[ 5276] 00:19:01.265 bw ( KiB/s): min=18944, max=25920, per=24.95%, avg=22395.33, stdev=2719.29, samples=9 00:19:01.265 iops : min= 2368, max= 3240, avg=2799.33, stdev=340.02, samples=9 00:19:01.265 lat (usec) : 750=0.12%, 1000=0.40% 00:19:01.265 lat (msec) : 2=22.07%, 4=75.01%, 10=2.40% 00:19:01.265 cpu : usr=94.86%, sys=4.62%, ctx=13, majf=0, minf=1 00:19:01.265 IO depths : 1=0.1%, 2=9.8%, 4=58.7%, 8=31.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.265 complete : 0=0.0%, 4=96.3%, 8=3.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.265 issued rwts: total=13908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.265 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:01.265 filename1: (groupid=0, jobs=1): err= 0: pid=82642: Tue Nov 26 20:42:14 2024 00:19:01.265 read: IOPS=2784, BW=21.8MiB/s (22.8MB/s)(109MiB/5001msec) 00:19:01.265 slat (nsec): min=3924, max=43559, avg=9384.41, stdev=5677.25 00:19:01.265 clat (usec): min=497, max=5209, avg=2845.06, stdev=776.42 00:19:01.265 lat (usec): min=502, max=5216, avg=2854.45, stdev=776.54 00:19:01.265 clat percentiles (usec): 00:19:01.265 | 1.00th=[ 979], 
5.00th=[ 1467], 10.00th=[ 1582], 20.00th=[ 1942], 00:19:01.265 | 30.00th=[ 2474], 40.00th=[ 2933], 50.00th=[ 3195], 60.00th=[ 3294], 00:19:01.265 | 70.00th=[ 3359], 80.00th=[ 3392], 90.00th=[ 3589], 95.00th=[ 3687], 00:19:01.265 | 99.00th=[ 4178], 99.50th=[ 4490], 99.90th=[ 4555], 99.95th=[ 4621], 00:19:01.265 | 99.99th=[ 5080] 00:19:01.265 bw ( KiB/s): min=18688, max=26000, per=24.63%, avg=22108.44, stdev=2674.32, samples=9 00:19:01.265 iops : min= 2336, max= 3250, avg=2763.56, stdev=334.29, samples=9 00:19:01.265 lat (usec) : 500=0.01%, 750=0.13%, 1000=1.07% 00:19:01.265 lat (msec) : 2=20.28%, 4=76.79%, 10=1.72% 00:19:01.265 cpu : usr=94.18%, sys=5.26%, ctx=1035, majf=0, minf=0 00:19:01.265 IO depths : 1=0.1%, 2=10.0%, 4=58.7%, 8=31.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.265 complete : 0=0.0%, 4=96.2%, 8=3.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.265 issued rwts: total=13924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.265 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:01.265 filename1: (groupid=0, jobs=1): err= 0: pid=82643: Tue Nov 26 20:42:14 2024 00:19:01.265 read: IOPS=2682, BW=21.0MiB/s (22.0MB/s)(105MiB/5001msec) 00:19:01.265 slat (nsec): min=3904, max=39224, avg=10035.70, stdev=6068.35 00:19:01.265 clat (usec): min=770, max=6394, avg=2951.81, stdev=714.67 00:19:01.265 lat (usec): min=777, max=6406, avg=2961.84, stdev=714.74 00:19:01.265 clat percentiles (usec): 00:19:01.265 | 1.00th=[ 1156], 5.00th=[ 1516], 10.00th=[ 1778], 20.00th=[ 2180], 00:19:01.265 | 30.00th=[ 2802], 40.00th=[ 3130], 50.00th=[ 3261], 60.00th=[ 3326], 00:19:01.265 | 70.00th=[ 3392], 80.00th=[ 3425], 90.00th=[ 3556], 95.00th=[ 3720], 00:19:01.265 | 99.00th=[ 4146], 99.50th=[ 4424], 99.90th=[ 4555], 99.95th=[ 4752], 00:19:01.265 | 99.99th=[ 5342] 00:19:01.265 bw ( KiB/s): min=18688, max=25648, per=23.76%, avg=21325.89, stdev=2183.52, samples=9 00:19:01.265 iops : min= 2336, max= 3206, avg=2665.67, stdev=272.89, samples=9 00:19:01.265 lat (usec) : 1000=0.42% 00:19:01.265 lat (msec) : 2=15.81%, 4=81.97%, 10=1.80% 00:19:01.265 cpu : usr=94.22%, sys=5.28%, ctx=6, majf=0, minf=0 00:19:01.265 IO depths : 1=0.1%, 2=12.5%, 4=57.2%, 8=30.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.265 complete : 0=0.0%, 4=95.2%, 8=4.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.265 issued rwts: total=13413,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.265 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:01.265 00:19:01.265 Run status group 0 (all jobs): 00:19:01.265 READ: bw=87.7MiB/s (91.9MB/s), 21.0MiB/s-23.2MiB/s (22.0MB/s-24.4MB/s), io=438MiB (460MB), run=5001-5002msec 00:19:01.265 20:42:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:19:01.265 20:42:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:19:01.265 20:42:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:01.265 20:42:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:01.265 20:42:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:19:01.265 20:42:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:01.265 20:42:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.265 20:42:14 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:19:01.265 20:42:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.265 20:42:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:01.265 20:42:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.265 20:42:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:01.265 20:42:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.265 20:42:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:01.265 20:42:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:01.265 20:42:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:19:01.265 20:42:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:01.265 20:42:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.265 20:42:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:01.265 20:42:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.265 20:42:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:01.265 20:42:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.265 20:42:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:01.265 ************************************ 00:19:01.265 END TEST fio_dif_rand_params 00:19:01.265 ************************************ 00:19:01.265 20:42:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.265 00:19:01.265 real 0m22.859s 00:19:01.265 user 2m6.134s 00:19:01.265 sys 0m5.428s 00:19:01.265 20:42:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:01.265 20:42:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:01.265 20:42:14 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:19:01.265 20:42:14 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:01.265 20:42:14 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:01.265 20:42:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:01.265 ************************************ 00:19:01.265 START TEST fio_dif_digest 00:19:01.265 ************************************ 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:01.265 bdev_null0 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:01.265 [2024-11-26 20:42:14.965924] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:01.265 20:42:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:19:01.266 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:01.266 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:19:01.266 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:01.266 20:42:14 
nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:19:01.266 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:01.266 20:42:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:01.266 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:19:01.266 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:19:01.266 20:42:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:01.266 { 00:19:01.266 "params": { 00:19:01.266 "name": "Nvme$subsystem", 00:19:01.266 "trtype": "$TEST_TRANSPORT", 00:19:01.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.266 "adrfam": "ipv4", 00:19:01.266 "trsvcid": "$NVMF_PORT", 00:19:01.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.266 "hdgst": ${hdgst:-false}, 00:19:01.266 "ddgst": ${ddgst:-false} 00:19:01.266 }, 00:19:01.266 "method": "bdev_nvme_attach_controller" 00:19:01.266 } 00:19:01.266 EOF 00:19:01.266 )") 00:19:01.266 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:01.266 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:19:01.266 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:01.266 20:42:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:19:01.266 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:01.266 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:19:01.266 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:01.266 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:19:01.266 20:42:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:19:01.266 20:42:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
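Note: the target-side objects this digest test runs against were created a few lines up with rpc_cmd: a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3, exposed through an NVMe/TCP subsystem on 10.0.0.3:4420. Done by hand against a running target, the same sequence would look roughly like this (scripts/rpc.py is the script rpc_cmd wraps; the working directory and the already-created tcp transport are assumptions):
cd /home/vagrant/spdk_repo/spdk
# assumes the transport already exists, e.g. ./scripts/rpc.py nvmf_create_transport -t tcp
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.3 -s 4420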
00:19:01.266 20:42:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:19:01.266 20:42:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:01.266 "params": { 00:19:01.266 "name": "Nvme0", 00:19:01.266 "trtype": "tcp", 00:19:01.266 "traddr": "10.0.0.3", 00:19:01.266 "adrfam": "ipv4", 00:19:01.266 "trsvcid": "4420", 00:19:01.266 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:01.266 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:01.266 "hdgst": true, 00:19:01.266 "ddgst": true 00:19:01.266 }, 00:19:01.266 "method": "bdev_nvme_attach_controller" 00:19:01.266 }' 00:19:01.266 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:01.266 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:01.266 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:01.266 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:01.266 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:01.266 20:42:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:01.266 20:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:01.266 20:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:01.266 20:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:01.266 20:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:01.266 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:01.266 ... 
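Note: a standalone sketch of this digest workload. The TCP header/data digests are not fio options; they ride in the "hdgst": true / "ddgst": true params of the JSON printed above, so the job file only carries the workload shape (dif.sh@127: bs=128k, iodepth=3, numjobs=3, runtime=10). The bdev name, temp paths and the thread/time_based settings are assumptions.
cat > /tmp/dif_digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
# the hdgst/ddgst-enabled attach config from above, saved to a file
spdk_json_conf=/tmp/nvme_digest.json
thread=1
rw=randread
bs=128k
iodepth=3
# one job section x numjobs=3 gives the 3 threads started below
numjobs=3
runtime=10
time_based=1

[filename0]
filename=Nvme0n1
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio /tmp/dif_digest.fio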
00:19:01.266 fio-3.35 00:19:01.266 Starting 3 threads 00:19:11.225 00:19:11.225 filename0: (groupid=0, jobs=1): err= 0: pid=82751: Tue Nov 26 20:42:25 2024 00:19:11.225 read: IOPS=302, BW=37.9MiB/s (39.7MB/s)(379MiB/10005msec) 00:19:11.225 slat (nsec): min=5803, max=50257, avg=8064.77, stdev=2336.97 00:19:11.225 clat (usec): min=5322, max=10313, avg=9886.71, stdev=197.46 00:19:11.225 lat (usec): min=5334, max=10322, avg=9894.77, stdev=197.24 00:19:11.225 clat percentiles (usec): 00:19:11.225 | 1.00th=[ 9634], 5.00th=[ 9634], 10.00th=[ 9634], 20.00th=[ 9765], 00:19:11.225 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[ 9896], 60.00th=[10028], 00:19:11.225 | 70.00th=[10028], 80.00th=[10028], 90.00th=[10028], 95.00th=[10028], 00:19:11.225 | 99.00th=[10159], 99.50th=[10290], 99.90th=[10290], 99.95th=[10290], 00:19:11.225 | 99.99th=[10290] 00:19:11.225 bw ( KiB/s): min=38400, max=39168, per=33.37%, avg=38800.05, stdev=390.32, samples=19 00:19:11.225 iops : min= 300, max= 306, avg=303.11, stdev= 3.03, samples=19 00:19:11.225 lat (msec) : 10=84.95%, 20=15.05% 00:19:11.225 cpu : usr=92.82%, sys=6.70%, ctx=14, majf=0, minf=0 00:19:11.225 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:11.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.225 issued rwts: total=3030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.225 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:11.225 filename0: (groupid=0, jobs=1): err= 0: pid=82752: Tue Nov 26 20:42:25 2024 00:19:11.225 read: IOPS=302, BW=37.8MiB/s (39.7MB/s)(379MiB/10007msec) 00:19:11.225 slat (nsec): min=4068, max=38677, avg=10821.18, stdev=6400.84 00:19:11.225 clat (usec): min=6623, max=10545, avg=9883.41, stdev=176.84 00:19:11.225 lat (usec): min=6629, max=10557, avg=9894.24, stdev=176.87 00:19:11.225 clat percentiles (usec): 00:19:11.225 | 1.00th=[ 9634], 5.00th=[ 9634], 10.00th=[ 9634], 20.00th=[ 9765], 00:19:11.225 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[ 9896], 60.00th=[10028], 00:19:11.225 | 70.00th=[10028], 80.00th=[10028], 90.00th=[10028], 95.00th=[10028], 00:19:11.225 | 99.00th=[10159], 99.50th=[10290], 99.90th=[10290], 99.95th=[10552], 00:19:11.225 | 99.99th=[10552] 00:19:11.225 bw ( KiB/s): min=38400, max=39168, per=33.34%, avg=38763.79, stdev=393.98, samples=19 00:19:11.225 iops : min= 300, max= 306, avg=302.84, stdev= 3.08, samples=19 00:19:11.225 lat (msec) : 10=84.16%, 20=15.84% 00:19:11.225 cpu : usr=92.87%, sys=6.72%, ctx=15, majf=0, minf=0 00:19:11.225 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:11.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.225 issued rwts: total=3030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.225 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:11.225 filename0: (groupid=0, jobs=1): err= 0: pid=82753: Tue Nov 26 20:42:25 2024 00:19:11.225 read: IOPS=302, BW=37.8MiB/s (39.7MB/s)(379MiB/10007msec) 00:19:11.225 slat (nsec): min=4036, max=38477, avg=10829.18, stdev=6411.83 00:19:11.225 clat (usec): min=6622, max=10678, avg=9883.19, stdev=177.22 00:19:11.225 lat (usec): min=6629, max=10690, avg=9894.02, stdev=177.29 00:19:11.225 clat percentiles (usec): 00:19:11.225 | 1.00th=[ 9634], 5.00th=[ 9634], 10.00th=[ 9634], 20.00th=[ 9765], 00:19:11.225 | 30.00th=[ 9765], 40.00th=[ 9896], 
50.00th=[ 9896], 60.00th=[10028], 00:19:11.225 | 70.00th=[10028], 80.00th=[10028], 90.00th=[10028], 95.00th=[10028], 00:19:11.225 | 99.00th=[10159], 99.50th=[10159], 99.90th=[10290], 99.95th=[10683], 00:19:11.225 | 99.99th=[10683] 00:19:11.225 bw ( KiB/s): min=38400, max=39168, per=33.34%, avg=38763.79, stdev=393.98, samples=19 00:19:11.225 iops : min= 300, max= 306, avg=302.84, stdev= 3.08, samples=19 00:19:11.225 lat (msec) : 10=84.79%, 20=15.21% 00:19:11.225 cpu : usr=93.07%, sys=6.52%, ctx=10, majf=0, minf=0 00:19:11.225 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:11.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.225 issued rwts: total=3030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.225 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:11.225 00:19:11.225 Run status group 0 (all jobs): 00:19:11.225 READ: bw=114MiB/s (119MB/s), 37.8MiB/s-37.9MiB/s (39.7MB/s-39.7MB/s), io=1136MiB (1191MB), run=10005-10007msec 00:19:11.225 20:42:25 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:19:11.225 20:42:25 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:19:11.225 20:42:25 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:19:11.225 20:42:25 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:11.225 20:42:25 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:19:11.225 20:42:25 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:11.225 20:42:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.225 20:42:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:11.225 20:42:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.225 20:42:25 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:11.225 20:42:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.225 20:42:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:11.226 ************************************ 00:19:11.226 END TEST fio_dif_digest 00:19:11.226 ************************************ 00:19:11.226 20:42:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.226 00:19:11.226 real 0m10.806s 00:19:11.226 user 0m28.414s 00:19:11.226 sys 0m2.160s 00:19:11.226 20:42:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:11.226 20:42:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:11.226 20:42:25 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:11.226 20:42:25 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:19:11.226 20:42:25 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:11.226 20:42:25 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:19:11.483 20:42:25 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:11.483 20:42:25 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:19:11.483 20:42:25 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:11.483 20:42:25 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:11.483 rmmod nvme_tcp 00:19:11.483 rmmod nvme_fabrics 00:19:11.483 rmmod nvme_keyring 00:19:11.483 20:42:25 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:11.483 20:42:25 
nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:19:11.483 20:42:25 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:19:11.483 20:42:25 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 81983 ']' 00:19:11.483 20:42:25 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 81983 00:19:11.483 20:42:25 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 81983 ']' 00:19:11.483 20:42:25 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 81983 00:19:11.483 20:42:25 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:19:11.483 20:42:25 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.483 20:42:25 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81983 00:19:11.483 killing process with pid 81983 00:19:11.483 20:42:25 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:11.483 20:42:25 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:11.483 20:42:25 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81983' 00:19:11.483 20:42:25 nvmf_dif -- common/autotest_common.sh@973 -- # kill 81983 00:19:11.483 20:42:25 nvmf_dif -- common/autotest_common.sh@978 -- # wait 81983 00:19:11.483 20:42:25 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:19:11.483 20:42:25 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:11.740 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:11.740 Waiting for block devices as requested 00:19:11.740 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:11.997 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:11.997 20:42:26 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:11.997 20:42:26 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:11.997 20:42:26 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:19:11.997 20:42:26 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:11.997 20:42:26 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:19:11.997 20:42:26 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:19:11.997 20:42:26 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:11.997 20:42:26 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:11.997 20:42:26 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:11.997 20:42:26 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:11.997 20:42:26 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:11.997 20:42:26 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:11.997 20:42:26 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:11.997 20:42:26 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:11.997 20:42:26 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:11.997 20:42:26 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:11.997 20:42:26 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:11.997 20:42:26 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:11.997 20:42:26 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:12.294 20:42:26 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:12.294 20:42:26 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:12.294 20:42:26 
nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:12.294 20:42:26 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.294 20:42:26 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:12.294 20:42:26 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.294 20:42:26 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:19:12.294 ************************************ 00:19:12.294 END TEST nvmf_dif 00:19:12.294 ************************************ 00:19:12.294 00:19:12.294 real 0m58.079s 00:19:12.294 user 3m51.699s 00:19:12.294 sys 0m13.904s 00:19:12.294 20:42:26 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:12.294 20:42:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:12.294 20:42:26 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:12.294 20:42:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:12.294 20:42:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:12.294 20:42:26 -- common/autotest_common.sh@10 -- # set +x 00:19:12.294 ************************************ 00:19:12.294 START TEST nvmf_abort_qd_sizes 00:19:12.294 ************************************ 00:19:12.294 20:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:12.294 * Looking for test storage... 00:19:12.294 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:12.294 20:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:12.294 20:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:19:12.294 20:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:12.294 20:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:12.294 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:12.294 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:12.294 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:12.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.295 --rc genhtml_branch_coverage=1 00:19:12.295 --rc genhtml_function_coverage=1 00:19:12.295 --rc genhtml_legend=1 00:19:12.295 --rc geninfo_all_blocks=1 00:19:12.295 --rc geninfo_unexecuted_blocks=1 00:19:12.295 00:19:12.295 ' 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:12.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.295 --rc genhtml_branch_coverage=1 00:19:12.295 --rc genhtml_function_coverage=1 00:19:12.295 --rc genhtml_legend=1 00:19:12.295 --rc geninfo_all_blocks=1 00:19:12.295 --rc geninfo_unexecuted_blocks=1 00:19:12.295 00:19:12.295 ' 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:12.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.295 --rc genhtml_branch_coverage=1 00:19:12.295 --rc genhtml_function_coverage=1 00:19:12.295 --rc genhtml_legend=1 00:19:12.295 --rc geninfo_all_blocks=1 00:19:12.295 --rc geninfo_unexecuted_blocks=1 00:19:12.295 00:19:12.295 ' 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:12.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.295 --rc genhtml_branch_coverage=1 00:19:12.295 --rc genhtml_function_coverage=1 00:19:12.295 --rc genhtml_legend=1 00:19:12.295 --rc geninfo_all_blocks=1 00:19:12.295 --rc geninfo_unexecuted_blocks=1 00:19:12.295 00:19:12.295 ' 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:12.295 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:12.295 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:12.553 Cannot find device "nvmf_init_br" 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:12.553 Cannot find device "nvmf_init_br2" 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:12.553 Cannot find device "nvmf_tgt_br" 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:12.553 Cannot find device "nvmf_tgt_br2" 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:12.553 Cannot find device "nvmf_init_br" 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:12.553 Cannot find device "nvmf_init_br2" 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:12.553 Cannot find device "nvmf_tgt_br" 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:12.553 Cannot find device "nvmf_tgt_br2" 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:12.553 Cannot find device "nvmf_br" 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:12.553 Cannot find device "nvmf_init_if" 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:12.553 Cannot find device "nvmf_init_if2" 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:12.553 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
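Each of the interface-cleanup commands above is allowed to fail (the "Cannot find device" messages) because nvmf_veth_init pairs it with a bare `true`, so `set -e` does not abort when nothing is left over from a previous run. A condensed sketch of that best-effort teardown, using only the interface, bridge and namespace names that appear in the trace (the namespace itself is removed separately by _remove_spdk_ns, whose output is suppressed earlier in the trace):

    # Best-effort removal of leftover test interfaces before rebuilding the topology.
    # "|| true" plays the role of the explicit "true" commands in the log.
    set -e
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster || true   # detach from the bridge if attached
        ip link set "$dev" down     || true
    done
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if  || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
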
00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:12.553 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:12.553 20:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:12.553 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:12.553 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:12.553 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:12.553 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:12.553 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:12.553 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:12.553 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:12.553 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:12.553 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:12.553 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:12.553 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:12.553 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:12.553 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:12.553 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:12.553 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:12.553 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:12.553 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:12.553 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:12.553 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:12.810 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:12.810 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:12.810 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:12.810 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:12.810 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:12.810 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:12.810 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:12.810 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:12.810 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:19:12.810 00:19:12.810 --- 10.0.0.3 ping statistics --- 00:19:12.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.810 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:19:12.810 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:12.810 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:12.810 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:19:12.810 00:19:12.810 --- 10.0.0.4 ping statistics --- 00:19:12.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.810 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:19:12.810 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:12.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:12.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:19:12.810 00:19:12.810 --- 10.0.0.1 ping statistics --- 00:19:12.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.810 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:19:12.810 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:12.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:12.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:19:12.810 00:19:12.810 --- 10.0.0.2 ping statistics --- 00:19:12.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.810 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:19:12.810 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:12.810 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:19:12.810 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:19:12.810 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:13.066 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:13.323 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:13.323 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:13.323 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:13.323 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:13.323 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:13.323 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:13.323 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:13.323 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:13.323 20:42:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:19:13.323 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:13.323 20:42:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:13.323 20:42:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:13.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.323 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=83396 00:19:13.323 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 83396 00:19:13.323 20:42:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 83396 ']' 00:19:13.323 20:42:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.323 20:42:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:13.323 20:42:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.323 20:42:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:13.323 20:42:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:13.323 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:19:13.323 [2024-11-26 20:42:27.860337] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
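nvmfappstart above runs the target inside the test namespace (NVMF_APP is prefixed with the NVMF_TARGET_NS_CMD built earlier) and then blocks in waitforlisten until the RPC socket is available. The launch line is taken verbatim from the trace; the polling loop below is only a rough stand-in for waitforlisten, whose real implementation is not shown in this log:

    # Start nvmf_tgt on cores 0-3 (-m 0xf) inside the target namespace, as traced above.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!

    # Hypothetical stand-in for waitforlisten: poll until the app's RPC socket appears.
    rpc_sock=/var/tmp/spdk.sock
    while ! [ -S "$rpc_sock" ]; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.2
    done
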
00:19:13.323 [2024-11-26 20:42:27.860398] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.581 [2024-11-26 20:42:28.000338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:13.581 [2024-11-26 20:42:28.037507] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:13.581 [2024-11-26 20:42:28.037723] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:13.581 [2024-11-26 20:42:28.037795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:13.581 [2024-11-26 20:42:28.037823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:13.581 [2024-11-26 20:42:28.037838] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:13.581 [2024-11-26 20:42:28.038554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.581 [2024-11-26 20:42:28.038634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.581 [2024-11-26 20:42:28.039265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:13.581 [2024-11-26 20:42:28.039271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.581 [2024-11-26 20:42:28.070467] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:14.512 20:42:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:14.512 20:42:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:19:14.512 20:42:28 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:14.512 20:42:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:14.512 20:42:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:14.512 20:42:28 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.512 20:42:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:19:14.512 20:42:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:19:14.512 20:42:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:19:14.512 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:19:14.512 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:19:14.512 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:19:14.512 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:19:14.512 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:19:14.512 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:19:14.512 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:19:14.512 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:19:14.512 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:19:14.512 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:19:14.512 20:42:28 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
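nvme_in_userspace above selects NVMe controllers purely by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe), i.e. class string 0108 plus the -p02 token in lspci's machine-readable output. A minimal sketch of that enumeration, mirroring the lspci/awk pipeline in the trace:

    # List NVMe controller PCI addresses by class code, as iter_pci_class_code does above.
    class=$(printf '%02x' 1); subclass=$(printf '%02x' 8); progif=$(printf '%02x' 2)
    lspci -mm -n -D \
      | grep -i -- "-p${progif}" \
      | awk -v cc="\"${class}${subclass}\"" '{ if (cc ~ $2) print $1 }' \
      | tr -d '"'
    # On this VM that yields 0000:00:10.0 and 0000:00:11.0; the first address is the
    # one handed to spdk_target_abort as its controller below.
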
00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:14.513 20:42:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:14.513 ************************************ 00:19:14.513 START TEST spdk_target_abort 00:19:14.513 ************************************ 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:14.513 spdk_targetn1 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:14.513 [2024-11-26 20:42:28.873533] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:14.513 [2024-11-26 20:42:28.908916] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:14.513 20:42:28 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:14.513 20:42:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:17.797 Initializing NVMe Controllers 00:19:17.797 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:19:17.797 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:17.797 Initialization complete. Launching workers. 
00:19:17.797 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16237, failed: 0 00:19:17.797 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1010, failed to submit 15227 00:19:17.797 success 847, unsuccessful 163, failed 0 00:19:17.797 20:42:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:17.797 20:42:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:21.077 Initializing NVMe Controllers 00:19:21.077 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:19:21.077 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:21.077 Initialization complete. Launching workers. 00:19:21.077 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8882, failed: 0 00:19:21.077 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1158, failed to submit 7724 00:19:21.077 success 387, unsuccessful 771, failed 0 00:19:21.077 20:42:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:21.077 20:42:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:24.357 Initializing NVMe Controllers 00:19:24.357 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:19:24.357 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:24.357 Initialization complete. Launching workers. 
00:19:24.357 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37381, failed: 0 00:19:24.357 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2318, failed to submit 35063 00:19:24.357 success 538, unsuccessful 1780, failed 0 00:19:24.357 20:42:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:19:24.357 20:42:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.357 20:42:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:24.357 20:42:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.357 20:42:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:19:24.357 20:42:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.357 20:42:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:26.256 20:42:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.256 20:42:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 83396 00:19:26.256 20:42:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 83396 ']' 00:19:26.256 20:42:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 83396 00:19:26.256 20:42:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:19:26.256 20:42:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.256 20:42:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83396 00:19:26.256 20:42:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:26.256 killing process with pid 83396 00:19:26.256 20:42:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:26.256 20:42:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83396' 00:19:26.256 20:42:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 83396 00:19:26.256 20:42:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 83396 00:19:26.256 00:19:26.256 real 0m11.980s 00:19:26.256 user 0m47.096s 00:19:26.256 sys 0m1.888s 00:19:26.256 20:42:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.256 20:42:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:26.256 ************************************ 00:19:26.256 END TEST spdk_target_abort 00:19:26.256 ************************************ 00:19:26.514 20:42:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:19:26.514 20:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:26.514 20:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:26.514 20:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:26.514 ************************************ 00:19:26.514 START TEST kernel_target_abort 00:19:26.514 
************************************ 00:19:26.514 20:42:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:19:26.514 20:42:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:19:26.514 20:42:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:19:26.514 20:42:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:26.514 20:42:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:26.514 20:42:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.514 20:42:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.514 20:42:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:26.514 20:42:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:26.514 20:42:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:26.514 20:42:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:26.514 20:42:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:26.514 20:42:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:26.514 20:42:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:26.514 20:42:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:26.514 20:42:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:26.514 20:42:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:26.514 20:42:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:26.514 20:42:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:19:26.514 20:42:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:26.514 20:42:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:19:26.514 20:42:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:26.514 20:42:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:26.772 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:26.772 Waiting for block devices as requested 00:19:26.772 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:26.772 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:26.772 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:26.772 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:26.772 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:19:26.772 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:26.772 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:26.772 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:26.772 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:19:26.772 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:26.772 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:27.030 No valid GPT data, bailing 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:27.030 No valid GPT data, bailing 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:27.030 No valid GPT data, bailing 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:27.030 No valid GPT data, bailing 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:27.030 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 --hostid=38d6bd30-54c5-4858-a242-ab15764fb2d9 -a 10.0.0.1 -t tcp -s 4420 00:19:27.030 00:19:27.030 Discovery Log Number of Records 2, Generation counter 2 00:19:27.030 =====Discovery Log Entry 0====== 00:19:27.030 trtype: tcp 00:19:27.030 adrfam: ipv4 00:19:27.030 subtype: current discovery subsystem 00:19:27.030 treq: not specified, sq flow control disable supported 00:19:27.030 portid: 1 00:19:27.030 trsvcid: 4420 00:19:27.031 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:27.031 traddr: 10.0.0.1 00:19:27.031 eflags: none 00:19:27.031 sectype: none 00:19:27.031 =====Discovery Log Entry 1====== 00:19:27.031 trtype: tcp 00:19:27.031 adrfam: ipv4 00:19:27.031 subtype: nvme subsystem 00:19:27.031 treq: not specified, sq flow control disable supported 00:19:27.031 portid: 1 00:19:27.031 trsvcid: 4420 00:19:27.031 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:27.031 traddr: 10.0.0.1 00:19:27.031 eflags: none 00:19:27.031 sectype: none 00:19:27.031 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:19:27.031 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:27.031 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:27.031 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:19:27.031 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:27.031 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:19:27.031 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:27.031 20:42:41 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:27.031 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:27.031 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:27.031 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:27.031 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:27.031 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:27.031 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:27.031 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:19:27.031 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:27.031 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:19:27.031 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:27.031 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:27.031 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:27.031 20:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:30.389 Initializing NVMe Controllers 00:19:30.389 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:30.389 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:30.389 Initialization complete. Launching workers. 00:19:30.389 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 52811, failed: 0 00:19:30.389 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 52811, failed to submit 0 00:19:30.389 success 0, unsuccessful 52811, failed 0 00:19:30.389 20:42:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:30.389 20:42:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:33.670 Initializing NVMe Controllers 00:19:33.670 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:33.670 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:33.670 Initialization complete. Launching workers. 
00:19:33.670 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 84117, failed: 0 00:19:33.670 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36044, failed to submit 48073 00:19:33.670 success 0, unsuccessful 36044, failed 0 00:19:33.670 20:42:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:33.670 20:42:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:36.979 Initializing NVMe Controllers 00:19:36.979 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:36.979 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:36.979 Initialization complete. Launching workers. 00:19:36.979 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 103392, failed: 0 00:19:36.979 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25850, failed to submit 77542 00:19:36.979 success 0, unsuccessful 25850, failed 0 00:19:36.979 20:42:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:19:36.979 20:42:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:36.979 20:42:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:19:36.979 20:42:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:36.979 20:42:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:36.979 20:42:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:36.979 20:42:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:36.979 20:42:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:19:36.979 20:42:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:19:36.979 20:42:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:37.236 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:45.422 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:45.422 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:45.422 00:19:45.422 real 0m18.864s 00:19:45.422 user 0m7.221s 00:19:45.422 sys 0m9.477s 00:19:45.422 20:42:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:45.422 20:42:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:45.422 ************************************ 00:19:45.422 END TEST kernel_target_abort 00:19:45.422 ************************************ 00:19:45.422 20:42:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:19:45.422 20:42:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:19:45.422 
20:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:45.422 20:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:19:45.422 20:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:45.422 20:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:19:45.422 20:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:45.422 20:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:45.422 rmmod nvme_tcp 00:19:45.422 rmmod nvme_fabrics 00:19:45.422 rmmod nvme_keyring 00:19:45.422 20:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:45.422 20:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:19:45.422 20:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:19:45.422 20:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 83396 ']' 00:19:45.422 20:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 83396 00:19:45.422 20:42:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 83396 ']' 00:19:45.422 20:42:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 83396 00:19:45.422 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (83396) - No such process 00:19:45.422 Process with pid 83396 is not found 00:19:45.422 20:42:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 83396 is not found' 00:19:45.422 20:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:19:45.422 20:42:59 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:45.679 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:45.679 Waiting for block devices as requested 00:19:45.937 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:45.937 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:45.937 20:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:45.937 20:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:45.937 20:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:19:45.937 20:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:19:45.937 20:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:45.937 20:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:19:45.937 20:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:45.937 20:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:45.937 20:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:45.937 20:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:45.937 20:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:45.937 20:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:45.937 20:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:45.937 20:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:45.937 20:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:45.937 20:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:45.937 20:43:00 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:46.194 20:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:46.194 20:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:46.194 20:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:46.194 20:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:46.194 20:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:46.194 20:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.194 20:43:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:46.194 20:43:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.194 20:43:00 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:19:46.194 00:19:46.194 real 0m33.960s 00:19:46.194 user 0m55.329s 00:19:46.194 sys 0m12.466s 00:19:46.194 20:43:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:46.194 20:43:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:46.194 ************************************ 00:19:46.194 END TEST nvmf_abort_qd_sizes 00:19:46.194 ************************************ 00:19:46.194 20:43:00 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:19:46.194 20:43:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:46.194 20:43:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:46.194 20:43:00 -- common/autotest_common.sh@10 -- # set +x 00:19:46.194 ************************************ 00:19:46.194 START TEST keyring_file 00:19:46.194 ************************************ 00:19:46.194 20:43:00 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:19:46.194 * Looking for test storage... 
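The nvmftestfini teardown traced above first strips only the SPDK-tagged firewall rules and then unwinds the veth/bridge topology used for the virtual NVMe/TCP network. A condensed sketch of the same steps (every interface name and command is taken from the trace; the per-interface loop simply folds the separate nomaster/down passes together):

    # drop only the SPDK_NVMF-tagged rules, keep everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # detach the veth endpoints from the bridge and bring them down
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    # delete the bridge, the host-side interfaces and the namespaced target-side interfaces
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2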
00:19:46.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:19:46.194 20:43:00 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:46.194 20:43:00 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:19:46.194 20:43:00 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:46.452 20:43:00 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:46.452 20:43:00 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:46.452 20:43:00 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:46.452 20:43:00 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:46.452 20:43:00 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:19:46.452 20:43:00 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:19:46.452 20:43:00 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:19:46.452 20:43:00 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:19:46.452 20:43:00 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:19:46.452 20:43:00 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:19:46.452 20:43:00 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:19:46.452 20:43:00 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:46.452 20:43:00 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:19:46.452 20:43:00 keyring_file -- scripts/common.sh@345 -- # : 1 00:19:46.452 20:43:00 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:46.452 20:43:00 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:46.452 20:43:00 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:19:46.452 20:43:00 keyring_file -- scripts/common.sh@353 -- # local d=1 00:19:46.452 20:43:00 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:46.452 20:43:00 keyring_file -- scripts/common.sh@355 -- # echo 1 00:19:46.452 20:43:00 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:19:46.452 20:43:00 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:19:46.452 20:43:00 keyring_file -- scripts/common.sh@353 -- # local d=2 00:19:46.452 20:43:00 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:46.452 20:43:00 keyring_file -- scripts/common.sh@355 -- # echo 2 00:19:46.452 20:43:00 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:19:46.453 20:43:00 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:46.453 20:43:00 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:46.453 20:43:00 keyring_file -- scripts/common.sh@368 -- # return 0 00:19:46.453 20:43:00 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:46.453 20:43:00 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:46.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:46.453 --rc genhtml_branch_coverage=1 00:19:46.453 --rc genhtml_function_coverage=1 00:19:46.453 --rc genhtml_legend=1 00:19:46.453 --rc geninfo_all_blocks=1 00:19:46.453 --rc geninfo_unexecuted_blocks=1 00:19:46.453 00:19:46.453 ' 00:19:46.453 20:43:00 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:46.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:46.453 --rc genhtml_branch_coverage=1 00:19:46.453 --rc genhtml_function_coverage=1 00:19:46.453 --rc genhtml_legend=1 00:19:46.453 --rc geninfo_all_blocks=1 00:19:46.453 --rc 
geninfo_unexecuted_blocks=1 00:19:46.453 00:19:46.453 ' 00:19:46.453 20:43:00 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:46.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:46.453 --rc genhtml_branch_coverage=1 00:19:46.453 --rc genhtml_function_coverage=1 00:19:46.453 --rc genhtml_legend=1 00:19:46.453 --rc geninfo_all_blocks=1 00:19:46.453 --rc geninfo_unexecuted_blocks=1 00:19:46.453 00:19:46.453 ' 00:19:46.453 20:43:00 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:46.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:46.453 --rc genhtml_branch_coverage=1 00:19:46.453 --rc genhtml_function_coverage=1 00:19:46.453 --rc genhtml_legend=1 00:19:46.453 --rc geninfo_all_blocks=1 00:19:46.453 --rc geninfo_unexecuted_blocks=1 00:19:46.453 00:19:46.453 ' 00:19:46.453 20:43:00 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:19:46.453 20:43:00 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:46.453 20:43:00 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:19:46.453 20:43:00 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:46.453 20:43:00 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:46.453 20:43:00 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:46.453 20:43:00 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.453 20:43:00 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.453 20:43:00 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.453 20:43:00 keyring_file -- paths/export.sh@5 -- # export PATH 00:19:46.453 20:43:00 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@51 -- # : 0 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:46.453 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:46.453 20:43:00 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:19:46.453 20:43:00 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:19:46.453 20:43:00 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:19:46.453 20:43:00 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:19:46.453 20:43:00 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:19:46.453 20:43:00 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:19:46.453 20:43:00 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:19:46.453 20:43:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:19:46.453 20:43:00 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:19:46.453 20:43:00 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:19:46.453 20:43:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:19:46.453 20:43:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:19:46.453 20:43:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Q5HjZOTjFg 00:19:46.453 20:43:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@733 -- # python - 00:19:46.453 20:43:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Q5HjZOTjFg 00:19:46.453 20:43:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Q5HjZOTjFg 00:19:46.453 20:43:00 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Q5HjZOTjFg 00:19:46.453 20:43:00 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:19:46.453 20:43:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:19:46.453 20:43:00 keyring_file -- keyring/common.sh@17 -- # name=key1 00:19:46.453 20:43:00 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:19:46.453 20:43:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:19:46.453 20:43:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:19:46.453 20:43:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Pqa8Kl59Zy 00:19:46.453 20:43:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:19:46.453 20:43:00 keyring_file -- nvmf/common.sh@733 -- # python - 00:19:46.453 20:43:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Pqa8Kl59Zy 00:19:46.453 20:43:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Pqa8Kl59Zy 00:19:46.453 20:43:00 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Pqa8Kl59Zy 00:19:46.453 20:43:00 keyring_file -- keyring/file.sh@30 -- # tgtpid=84309 00:19:46.453 20:43:00 keyring_file -- keyring/file.sh@32 -- # waitforlisten 84309 00:19:46.453 20:43:00 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 84309 ']' 00:19:46.453 20:43:00 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:46.453 20:43:00 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.453 20:43:00 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:46.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
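prep_key, traced above for key0 and key1, turns a raw hex key into an NVMe/TCP TLS PSK in interchange format and parks it in a private temp file. A hedged sketch of the same flow, assuming the format_interchange_psk helper (whose python one-liner lives in nvmf/common.sh and is not reproduced here) writes the encoded string to stdout:

    key=00112233445566778899aabbccddeeff   # key0 material from file.sh@15
    path=$(mktemp)                         # e.g. /tmp/tmp.Q5HjZOTjFg in this run
    # digest 0 selects the plain (no retained hash) NVMeTLSkey-1 variant
    format_interchange_psk "$key" 0 > "$path"
    chmod 0600 "$path"                     # a 0660 copy is deliberately rejected later in the test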
00:19:46.453 20:43:00 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.453 20:43:00 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:46.453 20:43:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:19:46.453 [2024-11-26 20:43:00.955319] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:19:46.453 [2024-11-26 20:43:00.955386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84309 ] 00:19:46.712 [2024-11-26 20:43:01.095160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.712 [2024-11-26 20:43:01.131409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.712 [2024-11-26 20:43:01.175353] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:47.279 20:43:01 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:47.279 20:43:01 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:19:47.279 20:43:01 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:19:47.279 20:43:01 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.279 20:43:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:19:47.279 [2024-11-26 20:43:01.825573] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.537 null0 00:19:47.537 [2024-11-26 20:43:01.857536] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:47.537 [2024-11-26 20:43:01.857676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:19:47.537 20:43:01 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.537 20:43:01 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:19:47.537 20:43:01 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:19:47.537 20:43:01 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:19:47.537 20:43:01 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:47.537 20:43:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:47.537 20:43:01 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:47.537 20:43:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:47.537 20:43:01 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:19:47.537 20:43:01 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.537 20:43:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:19:47.537 [2024-11-26 20:43:01.885535] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:19:47.537 request: 00:19:47.537 { 00:19:47.537 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:19:47.538 "secure_channel": false, 00:19:47.538 "listen_address": { 00:19:47.538 "trtype": "tcp", 00:19:47.538 "traddr": "127.0.0.1", 00:19:47.538 "trsvcid": "4420" 00:19:47.538 }, 00:19:47.538 "method": "nvmf_subsystem_add_listener", 
00:19:47.538 "req_id": 1 00:19:47.538 } 00:19:47.538 Got JSON-RPC error response 00:19:47.538 response: 00:19:47.538 { 00:19:47.538 "code": -32602, 00:19:47.538 "message": "Invalid parameters" 00:19:47.538 } 00:19:47.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:47.538 20:43:01 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:47.538 20:43:01 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:19:47.538 20:43:01 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:47.538 20:43:01 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:47.538 20:43:01 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:47.538 20:43:01 keyring_file -- keyring/file.sh@47 -- # bperfpid=84325 00:19:47.538 20:43:01 keyring_file -- keyring/file.sh@49 -- # waitforlisten 84325 /var/tmp/bperf.sock 00:19:47.538 20:43:01 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 84325 ']' 00:19:47.538 20:43:01 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:47.538 20:43:01 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.538 20:43:01 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:47.538 20:43:01 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.538 20:43:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:19:47.538 20:43:01 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:19:47.538 [2024-11-26 20:43:01.929578] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
00:19:47.538 [2024-11-26 20:43:01.929632] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84325 ] 00:19:47.538 [2024-11-26 20:43:02.068936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.795 [2024-11-26 20:43:02.105711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.796 [2024-11-26 20:43:02.138342] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:48.361 20:43:02 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.361 20:43:02 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:19:48.361 20:43:02 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Q5HjZOTjFg 00:19:48.361 20:43:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Q5HjZOTjFg 00:19:48.619 20:43:02 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Pqa8Kl59Zy 00:19:48.619 20:43:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Pqa8Kl59Zy 00:19:48.877 20:43:03 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:19:48.877 20:43:03 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:19:48.877 20:43:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:48.877 20:43:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:48.877 20:43:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:48.877 20:43:03 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Q5HjZOTjFg == \/\t\m\p\/\t\m\p\.\Q\5\H\j\Z\O\T\j\F\g ]] 00:19:48.877 20:43:03 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:19:48.877 20:43:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:48.877 20:43:03 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:19:48.877 20:43:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:48.877 20:43:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:19:49.135 20:43:03 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.Pqa8Kl59Zy == \/\t\m\p\/\t\m\p\.\P\q\a\8\K\l\5\9\Z\y ]] 00:19:49.136 20:43:03 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:19:49.136 20:43:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:49.136 20:43:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:19:49.136 20:43:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:49.136 20:43:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:49.136 20:43:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:49.393 20:43:03 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:19:49.393 20:43:03 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:19:49.393 20:43:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:19:49.393 20:43:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:49.393 20:43:03 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:49.393 20:43:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:49.393 20:43:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:19:49.651 20:43:04 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:19:49.651 20:43:04 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:49.651 20:43:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:49.651 [2024-11-26 20:43:04.191177] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:49.908 nvme0n1 00:19:49.908 20:43:04 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:19:49.908 20:43:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:19:49.909 20:43:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:49.909 20:43:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:49.909 20:43:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:49.909 20:43:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:50.166 20:43:04 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:19:50.166 20:43:04 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:19:50.166 20:43:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:19:50.166 20:43:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:50.166 20:43:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:50.166 20:43:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:19:50.166 20:43:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:50.166 20:43:04 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:19:50.166 20:43:04 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:50.424 Running I/O for 1 seconds... 
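get_refcnt and get_key, used repeatedly above, are thin wrappers around keyring_get_keys plus jq. Written out as one pipeline against the bdevperf RPC socket; once nvme0 is attached with --psk key0 the test expects key0 to report refcnt 2 (presumably the keyring's own reference plus the attached controller's) while key1 stays at 1:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # list every registered key, pick key0, print its reference count
    "$rpc" -s /var/tmp/bperf.sock keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'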
00:19:51.358 19781.00 IOPS, 77.27 MiB/s 00:19:51.358 Latency(us) 00:19:51.358 [2024-11-26T20:43:05.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.358 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:19:51.358 nvme0n1 : 1.00 19832.33 77.47 0.00 0.00 6442.61 2583.63 16232.76 00:19:51.358 [2024-11-26T20:43:05.913Z] =================================================================================================================== 00:19:51.358 [2024-11-26T20:43:05.913Z] Total : 19832.33 77.47 0.00 0.00 6442.61 2583.63 16232.76 00:19:51.358 { 00:19:51.358 "results": [ 00:19:51.358 { 00:19:51.358 "job": "nvme0n1", 00:19:51.358 "core_mask": "0x2", 00:19:51.358 "workload": "randrw", 00:19:51.358 "percentage": 50, 00:19:51.358 "status": "finished", 00:19:51.358 "queue_depth": 128, 00:19:51.358 "io_size": 4096, 00:19:51.358 "runtime": 1.003967, 00:19:51.358 "iops": 19832.325166066214, 00:19:51.358 "mibps": 77.47002017994615, 00:19:51.358 "io_failed": 0, 00:19:51.358 "io_timeout": 0, 00:19:51.358 "avg_latency_us": 6442.611776250468, 00:19:51.358 "min_latency_us": 2583.630769230769, 00:19:51.358 "max_latency_us": 16232.763076923076 00:19:51.358 } 00:19:51.358 ], 00:19:51.358 "core_count": 1 00:19:51.358 } 00:19:51.358 20:43:05 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:19:51.358 20:43:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:19:51.616 20:43:06 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:19:51.616 20:43:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:19:51.616 20:43:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:51.616 20:43:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:51.616 20:43:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:51.616 20:43:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:51.874 20:43:06 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:19:51.874 20:43:06 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:19:51.874 20:43:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:19:51.874 20:43:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:51.874 20:43:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:51.874 20:43:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:51.874 20:43:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:19:52.131 20:43:06 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:19:52.131 20:43:06 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:19:52.131 20:43:06 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:19:52.131 20:43:06 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:19:52.131 20:43:06 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:19:52.131 20:43:06 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:52.131 20:43:06 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:19:52.131 20:43:06 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:52.131 20:43:06 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:19:52.131 20:43:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:19:52.131 [2024-11-26 20:43:06.640231] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:52.131 [2024-11-26 20:43:06.640993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b65d0 (107): Transport endpoint is not connected 00:19:52.131 [2024-11-26 20:43:06.641986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b65d0 (9): Bad file descriptor 00:19:52.132 [2024-11-26 20:43:06.642985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:19:52.132 [2024-11-26 20:43:06.643001] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:19:52.132 [2024-11-26 20:43:06.643006] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:19:52.132 [2024-11-26 20:43:06.643011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
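This is the wrong-key case: attaching with --psk key1 never gets a working connection to 127.0.0.1:4420 (errno 107, transport endpoint is not connected), the controller ends up in a failed state, and the RPC returns -5 Input/output error; the JSON-RPC request and error response are dumped just below. The NOT wrapper asserts exactly that, which as a plain shell check (all flags copied from the traced rpc.py call) looks like:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    if "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
          -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
          -q nqn.2016-06.io.spdk:host0 --psk key1; then
        echo "unexpected success: attach with key1 should have failed" >&2
        exit 1
    fi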
00:19:52.132 request: 00:19:52.132 { 00:19:52.132 "name": "nvme0", 00:19:52.132 "trtype": "tcp", 00:19:52.132 "traddr": "127.0.0.1", 00:19:52.132 "adrfam": "ipv4", 00:19:52.132 "trsvcid": "4420", 00:19:52.132 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:52.132 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:52.132 "prchk_reftag": false, 00:19:52.132 "prchk_guard": false, 00:19:52.132 "hdgst": false, 00:19:52.132 "ddgst": false, 00:19:52.132 "psk": "key1", 00:19:52.132 "allow_unrecognized_csi": false, 00:19:52.132 "method": "bdev_nvme_attach_controller", 00:19:52.132 "req_id": 1 00:19:52.132 } 00:19:52.132 Got JSON-RPC error response 00:19:52.132 response: 00:19:52.132 { 00:19:52.132 "code": -5, 00:19:52.132 "message": "Input/output error" 00:19:52.132 } 00:19:52.132 20:43:06 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:19:52.132 20:43:06 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:52.132 20:43:06 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:52.132 20:43:06 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:52.132 20:43:06 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:19:52.132 20:43:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:52.132 20:43:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:19:52.132 20:43:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:52.132 20:43:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:52.132 20:43:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:52.389 20:43:06 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:19:52.389 20:43:06 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:19:52.389 20:43:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:19:52.389 20:43:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:52.389 20:43:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:19:52.389 20:43:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:52.389 20:43:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:52.647 20:43:07 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:19:52.647 20:43:07 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:19:52.647 20:43:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:19:52.905 20:43:07 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:19:52.905 20:43:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:19:53.162 20:43:07 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:19:53.162 20:43:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:53.162 20:43:07 keyring_file -- keyring/file.sh@78 -- # jq length 00:19:53.162 20:43:07 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:19:53.162 20:43:07 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.Q5HjZOTjFg 00:19:53.162 20:43:07 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Q5HjZOTjFg 00:19:53.162 20:43:07 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:19:53.162 20:43:07 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Q5HjZOTjFg 00:19:53.162 20:43:07 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:19:53.162 20:43:07 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:53.162 20:43:07 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:19:53.162 20:43:07 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:53.162 20:43:07 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Q5HjZOTjFg 00:19:53.162 20:43:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Q5HjZOTjFg 00:19:53.418 [2024-11-26 20:43:07.885517] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Q5HjZOTjFg': 0100660 00:19:53.418 [2024-11-26 20:43:07.885551] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:53.418 request: 00:19:53.418 { 00:19:53.418 "name": "key0", 00:19:53.418 "path": "/tmp/tmp.Q5HjZOTjFg", 00:19:53.418 "method": "keyring_file_add_key", 00:19:53.418 "req_id": 1 00:19:53.418 } 00:19:53.418 Got JSON-RPC error response 00:19:53.418 response: 00:19:53.418 { 00:19:53.418 "code": -1, 00:19:53.418 "message": "Operation not permitted" 00:19:53.418 } 00:19:53.418 20:43:07 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:19:53.418 20:43:07 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:53.418 20:43:07 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:53.418 20:43:07 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:53.418 20:43:07 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.Q5HjZOTjFg 00:19:53.418 20:43:07 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Q5HjZOTjFg 00:19:53.418 20:43:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Q5HjZOTjFg 00:19:53.676 20:43:08 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.Q5HjZOTjFg 00:19:53.676 20:43:08 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:19:53.676 20:43:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:53.676 20:43:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:19:53.676 20:43:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:53.676 20:43:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:53.676 20:43:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:53.933 20:43:08 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:19:53.933 20:43:08 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:53.933 20:43:08 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:19:53.933 20:43:08 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:53.933 20:43:08 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:19:53.933 20:43:08 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:53.933 20:43:08 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:19:53.933 20:43:08 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:53.933 20:43:08 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:53.933 20:43:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:54.190 [2024-11-26 20:43:08.513635] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Q5HjZOTjFg': No such file or directory 00:19:54.191 [2024-11-26 20:43:08.513666] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:19:54.191 [2024-11-26 20:43:08.513679] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:19:54.191 [2024-11-26 20:43:08.513684] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:19:54.191 [2024-11-26 20:43:08.513688] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:54.191 [2024-11-26 20:43:08.513692] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:19:54.191 request: 00:19:54.191 { 00:19:54.191 "name": "nvme0", 00:19:54.191 "trtype": "tcp", 00:19:54.191 "traddr": "127.0.0.1", 00:19:54.191 "adrfam": "ipv4", 00:19:54.191 "trsvcid": "4420", 00:19:54.191 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:54.191 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:54.191 "prchk_reftag": false, 00:19:54.191 "prchk_guard": false, 00:19:54.191 "hdgst": false, 00:19:54.191 "ddgst": false, 00:19:54.191 "psk": "key0", 00:19:54.191 "allow_unrecognized_csi": false, 00:19:54.191 "method": "bdev_nvme_attach_controller", 00:19:54.191 "req_id": 1 00:19:54.191 } 00:19:54.191 Got JSON-RPC error response 00:19:54.191 response: 00:19:54.191 { 00:19:54.191 "code": -19, 00:19:54.191 "message": "No such device" 00:19:54.191 } 00:19:54.191 20:43:08 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:19:54.191 20:43:08 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:54.191 20:43:08 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:54.191 20:43:08 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:54.191 20:43:08 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:19:54.191 20:43:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:19:54.191 20:43:08 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:19:54.191 20:43:08 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:19:54.191 20:43:08 keyring_file -- keyring/common.sh@17 -- # name=key0 00:19:54.191 20:43:08 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:19:54.191 
20:43:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:19:54.191 20:43:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:19:54.191 20:43:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.9I2xUpEQbR 00:19:54.191 20:43:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:19:54.191 20:43:08 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:19:54.191 20:43:08 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:19:54.191 20:43:08 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:54.191 20:43:08 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:54.191 20:43:08 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:19:54.191 20:43:08 keyring_file -- nvmf/common.sh@733 -- # python - 00:19:54.448 20:43:08 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9I2xUpEQbR 00:19:54.448 20:43:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.9I2xUpEQbR 00:19:54.448 20:43:08 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.9I2xUpEQbR 00:19:54.448 20:43:08 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9I2xUpEQbR 00:19:54.448 20:43:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9I2xUpEQbR 00:19:54.448 20:43:08 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:54.448 20:43:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:54.705 nvme0n1 00:19:54.705 20:43:09 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:19:54.705 20:43:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:19:54.705 20:43:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:54.963 20:43:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:54.963 20:43:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:54.963 20:43:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:54.963 20:43:09 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:19:54.963 20:43:09 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:19:54.963 20:43:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:19:55.220 20:43:09 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:19:55.220 20:43:09 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:19:55.220 20:43:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:55.220 20:43:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:55.220 20:43:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:55.478 20:43:09 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:19:55.478 20:43:09 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:19:55.478 20:43:09 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:19:55.478 20:43:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:55.478 20:43:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:55.478 20:43:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:55.478 20:43:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:55.735 20:43:10 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:19:55.735 20:43:10 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:19:55.735 20:43:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:19:55.993 20:43:10 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:19:55.993 20:43:10 keyring_file -- keyring/file.sh@105 -- # jq length 00:19:55.993 20:43:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:55.993 20:43:10 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:19:55.993 20:43:10 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9I2xUpEQbR 00:19:55.993 20:43:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9I2xUpEQbR 00:19:56.249 20:43:10 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Pqa8Kl59Zy 00:19:56.249 20:43:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Pqa8Kl59Zy 00:19:56.506 20:43:10 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:56.506 20:43:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:56.764 nvme0n1 00:19:56.764 20:43:11 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:19:56.764 20:43:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:19:57.022 20:43:11 keyring_file -- keyring/file.sh@113 -- # config='{ 00:19:57.022 "subsystems": [ 00:19:57.022 { 00:19:57.022 "subsystem": "keyring", 00:19:57.022 "config": [ 00:19:57.022 { 00:19:57.022 "method": "keyring_file_add_key", 00:19:57.022 "params": { 00:19:57.022 "name": "key0", 00:19:57.022 "path": "/tmp/tmp.9I2xUpEQbR" 00:19:57.022 } 00:19:57.022 }, 00:19:57.022 { 00:19:57.022 "method": "keyring_file_add_key", 00:19:57.022 "params": { 00:19:57.022 "name": "key1", 00:19:57.022 "path": "/tmp/tmp.Pqa8Kl59Zy" 00:19:57.022 } 00:19:57.022 } 00:19:57.022 ] 00:19:57.022 }, 00:19:57.022 { 00:19:57.022 "subsystem": "iobuf", 00:19:57.022 "config": [ 00:19:57.022 { 00:19:57.022 "method": "iobuf_set_options", 00:19:57.022 "params": { 00:19:57.022 "small_pool_count": 8192, 00:19:57.022 "large_pool_count": 1024, 00:19:57.022 "small_bufsize": 8192, 00:19:57.022 "large_bufsize": 135168, 00:19:57.022 "enable_numa": false 00:19:57.022 } 00:19:57.022 } 00:19:57.022 ] 00:19:57.022 }, 00:19:57.022 { 00:19:57.022 "subsystem": 
"sock", 00:19:57.022 "config": [ 00:19:57.022 { 00:19:57.022 "method": "sock_set_default_impl", 00:19:57.022 "params": { 00:19:57.022 "impl_name": "uring" 00:19:57.022 } 00:19:57.022 }, 00:19:57.022 { 00:19:57.022 "method": "sock_impl_set_options", 00:19:57.023 "params": { 00:19:57.023 "impl_name": "ssl", 00:19:57.023 "recv_buf_size": 4096, 00:19:57.023 "send_buf_size": 4096, 00:19:57.023 "enable_recv_pipe": true, 00:19:57.023 "enable_quickack": false, 00:19:57.023 "enable_placement_id": 0, 00:19:57.023 "enable_zerocopy_send_server": true, 00:19:57.023 "enable_zerocopy_send_client": false, 00:19:57.023 "zerocopy_threshold": 0, 00:19:57.023 "tls_version": 0, 00:19:57.023 "enable_ktls": false 00:19:57.023 } 00:19:57.023 }, 00:19:57.023 { 00:19:57.023 "method": "sock_impl_set_options", 00:19:57.023 "params": { 00:19:57.023 "impl_name": "posix", 00:19:57.023 "recv_buf_size": 2097152, 00:19:57.023 "send_buf_size": 2097152, 00:19:57.023 "enable_recv_pipe": true, 00:19:57.023 "enable_quickack": false, 00:19:57.023 "enable_placement_id": 0, 00:19:57.023 "enable_zerocopy_send_server": true, 00:19:57.023 "enable_zerocopy_send_client": false, 00:19:57.023 "zerocopy_threshold": 0, 00:19:57.023 "tls_version": 0, 00:19:57.023 "enable_ktls": false 00:19:57.023 } 00:19:57.023 }, 00:19:57.023 { 00:19:57.023 "method": "sock_impl_set_options", 00:19:57.023 "params": { 00:19:57.023 "impl_name": "uring", 00:19:57.023 "recv_buf_size": 2097152, 00:19:57.023 "send_buf_size": 2097152, 00:19:57.023 "enable_recv_pipe": true, 00:19:57.023 "enable_quickack": false, 00:19:57.023 "enable_placement_id": 0, 00:19:57.023 "enable_zerocopy_send_server": false, 00:19:57.023 "enable_zerocopy_send_client": false, 00:19:57.023 "zerocopy_threshold": 0, 00:19:57.023 "tls_version": 0, 00:19:57.023 "enable_ktls": false 00:19:57.023 } 00:19:57.023 } 00:19:57.023 ] 00:19:57.023 }, 00:19:57.023 { 00:19:57.023 "subsystem": "vmd", 00:19:57.023 "config": [] 00:19:57.023 }, 00:19:57.023 { 00:19:57.023 "subsystem": "accel", 00:19:57.023 "config": [ 00:19:57.023 { 00:19:57.023 "method": "accel_set_options", 00:19:57.023 "params": { 00:19:57.023 "small_cache_size": 128, 00:19:57.023 "large_cache_size": 16, 00:19:57.023 "task_count": 2048, 00:19:57.023 "sequence_count": 2048, 00:19:57.023 "buf_count": 2048 00:19:57.023 } 00:19:57.023 } 00:19:57.023 ] 00:19:57.023 }, 00:19:57.023 { 00:19:57.023 "subsystem": "bdev", 00:19:57.023 "config": [ 00:19:57.023 { 00:19:57.023 "method": "bdev_set_options", 00:19:57.023 "params": { 00:19:57.023 "bdev_io_pool_size": 65535, 00:19:57.023 "bdev_io_cache_size": 256, 00:19:57.023 "bdev_auto_examine": true, 00:19:57.023 "iobuf_small_cache_size": 128, 00:19:57.023 "iobuf_large_cache_size": 16 00:19:57.023 } 00:19:57.023 }, 00:19:57.023 { 00:19:57.023 "method": "bdev_raid_set_options", 00:19:57.023 "params": { 00:19:57.023 "process_window_size_kb": 1024, 00:19:57.023 "process_max_bandwidth_mb_sec": 0 00:19:57.023 } 00:19:57.023 }, 00:19:57.023 { 00:19:57.023 "method": "bdev_iscsi_set_options", 00:19:57.023 "params": { 00:19:57.023 "timeout_sec": 30 00:19:57.023 } 00:19:57.023 }, 00:19:57.023 { 00:19:57.023 "method": "bdev_nvme_set_options", 00:19:57.023 "params": { 00:19:57.023 "action_on_timeout": "none", 00:19:57.023 "timeout_us": 0, 00:19:57.023 "timeout_admin_us": 0, 00:19:57.023 "keep_alive_timeout_ms": 10000, 00:19:57.023 "arbitration_burst": 0, 00:19:57.023 "low_priority_weight": 0, 00:19:57.023 "medium_priority_weight": 0, 00:19:57.023 "high_priority_weight": 0, 00:19:57.023 "nvme_adminq_poll_period_us": 
10000, 00:19:57.023 "nvme_ioq_poll_period_us": 0, 00:19:57.023 "io_queue_requests": 512, 00:19:57.023 "delay_cmd_submit": true, 00:19:57.023 "transport_retry_count": 4, 00:19:57.023 "bdev_retry_count": 3, 00:19:57.023 "transport_ack_timeout": 0, 00:19:57.023 "ctrlr_loss_timeout_sec": 0, 00:19:57.023 "reconnect_delay_sec": 0, 00:19:57.023 "fast_io_fail_timeout_sec": 0, 00:19:57.023 "disable_auto_failback": false, 00:19:57.023 "generate_uuids": false, 00:19:57.023 "transport_tos": 0, 00:19:57.023 "nvme_error_stat": false, 00:19:57.023 "rdma_srq_size": 0, 00:19:57.023 "io_path_stat": false, 00:19:57.023 "allow_accel_sequence": false, 00:19:57.023 "rdma_max_cq_size": 0, 00:19:57.023 "rdma_cm_event_timeout_ms": 0, 00:19:57.023 "dhchap_digests": [ 00:19:57.023 "sha256", 00:19:57.023 "sha384", 00:19:57.023 "sha512" 00:19:57.023 ], 00:19:57.023 "dhchap_dhgroups": [ 00:19:57.023 "null", 00:19:57.023 "ffdhe2048", 00:19:57.023 "ffdhe3072", 00:19:57.023 "ffdhe4096", 00:19:57.023 "ffdhe6144", 00:19:57.023 "ffdhe8192" 00:19:57.023 ] 00:19:57.023 } 00:19:57.023 }, 00:19:57.023 { 00:19:57.023 "method": "bdev_nvme_attach_controller", 00:19:57.023 "params": { 00:19:57.023 "name": "nvme0", 00:19:57.023 "trtype": "TCP", 00:19:57.023 "adrfam": "IPv4", 00:19:57.023 "traddr": "127.0.0.1", 00:19:57.023 "trsvcid": "4420", 00:19:57.023 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:57.023 "prchk_reftag": false, 00:19:57.023 "prchk_guard": false, 00:19:57.023 "ctrlr_loss_timeout_sec": 0, 00:19:57.023 "reconnect_delay_sec": 0, 00:19:57.023 "fast_io_fail_timeout_sec": 0, 00:19:57.023 "psk": "key0", 00:19:57.023 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:57.023 "hdgst": false, 00:19:57.023 "ddgst": false, 00:19:57.023 "multipath": "multipath" 00:19:57.023 } 00:19:57.023 }, 00:19:57.023 { 00:19:57.023 "method": "bdev_nvme_set_hotplug", 00:19:57.023 "params": { 00:19:57.023 "period_us": 100000, 00:19:57.023 "enable": false 00:19:57.023 } 00:19:57.023 }, 00:19:57.023 { 00:19:57.023 "method": "bdev_wait_for_examine" 00:19:57.023 } 00:19:57.023 ] 00:19:57.023 }, 00:19:57.023 { 00:19:57.023 "subsystem": "nbd", 00:19:57.023 "config": [] 00:19:57.023 } 00:19:57.023 ] 00:19:57.023 }' 00:19:57.023 20:43:11 keyring_file -- keyring/file.sh@115 -- # killprocess 84325 00:19:57.023 20:43:11 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 84325 ']' 00:19:57.023 20:43:11 keyring_file -- common/autotest_common.sh@958 -- # kill -0 84325 00:19:57.023 20:43:11 keyring_file -- common/autotest_common.sh@959 -- # uname 00:19:57.023 20:43:11 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:57.023 20:43:11 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84325 00:19:57.023 20:43:11 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:57.023 20:43:11 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:57.023 killing process with pid 84325 00:19:57.023 20:43:11 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84325' 00:19:57.023 20:43:11 keyring_file -- common/autotest_common.sh@973 -- # kill 84325 00:19:57.023 Received shutdown signal, test time was about 1.000000 seconds 00:19:57.023 00:19:57.023 Latency(us) 00:19:57.023 [2024-11-26T20:43:11.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.023 [2024-11-26T20:43:11.578Z] =================================================================================================================== 00:19:57.023 
[2024-11-26T20:43:11.578Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:57.023 20:43:11 keyring_file -- common/autotest_common.sh@978 -- # wait 84325 00:19:57.282 20:43:11 keyring_file -- keyring/file.sh@118 -- # bperfpid=84554 00:19:57.282 20:43:11 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:19:57.282 20:43:11 keyring_file -- keyring/file.sh@120 -- # waitforlisten 84554 /var/tmp/bperf.sock 00:19:57.282 20:43:11 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 84554 ']' 00:19:57.282 20:43:11 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:57.282 20:43:11 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:57.282 20:43:11 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:19:57.282 "subsystems": [ 00:19:57.282 { 00:19:57.282 "subsystem": "keyring", 00:19:57.282 "config": [ 00:19:57.282 { 00:19:57.282 "method": "keyring_file_add_key", 00:19:57.282 "params": { 00:19:57.282 "name": "key0", 00:19:57.282 "path": "/tmp/tmp.9I2xUpEQbR" 00:19:57.282 } 00:19:57.282 }, 00:19:57.282 { 00:19:57.282 "method": "keyring_file_add_key", 00:19:57.282 "params": { 00:19:57.282 "name": "key1", 00:19:57.282 "path": "/tmp/tmp.Pqa8Kl59Zy" 00:19:57.282 } 00:19:57.282 } 00:19:57.282 ] 00:19:57.282 }, 00:19:57.282 { 00:19:57.282 "subsystem": "iobuf", 00:19:57.282 "config": [ 00:19:57.282 { 00:19:57.282 "method": "iobuf_set_options", 00:19:57.282 "params": { 00:19:57.282 "small_pool_count": 8192, 00:19:57.282 "large_pool_count": 1024, 00:19:57.282 "small_bufsize": 8192, 00:19:57.282 "large_bufsize": 135168, 00:19:57.282 "enable_numa": false 00:19:57.282 } 00:19:57.282 } 00:19:57.282 ] 00:19:57.282 }, 00:19:57.282 { 00:19:57.282 "subsystem": "sock", 00:19:57.282 "config": [ 00:19:57.282 { 00:19:57.282 "method": "sock_set_default_impl", 00:19:57.282 "params": { 00:19:57.282 "impl_name": "uring" 00:19:57.282 } 00:19:57.282 }, 00:19:57.282 { 00:19:57.282 "method": "sock_impl_set_options", 00:19:57.282 "params": { 00:19:57.282 "impl_name": "ssl", 00:19:57.282 "recv_buf_size": 4096, 00:19:57.282 "send_buf_size": 4096, 00:19:57.282 "enable_recv_pipe": true, 00:19:57.282 "enable_quickack": false, 00:19:57.282 "enable_placement_id": 0, 00:19:57.282 "enable_zerocopy_send_server": true, 00:19:57.282 "enable_zerocopy_send_client": false, 00:19:57.282 "zerocopy_threshold": 0, 00:19:57.282 "tls_version": 0, 00:19:57.282 "enable_ktls": false 00:19:57.282 } 00:19:57.282 }, 00:19:57.282 { 00:19:57.282 "method": "sock_impl_set_options", 00:19:57.282 "params": { 00:19:57.282 "impl_name": "posix", 00:19:57.282 "recv_buf_size": 2097152, 00:19:57.282 "send_buf_size": 2097152, 00:19:57.282 "enable_recv_pipe": true, 00:19:57.282 "enable_quickack": false, 00:19:57.282 "enable_placement_id": 0, 00:19:57.282 "enable_zerocopy_send_server": true, 00:19:57.282 "enable_zerocopy_send_client": false, 00:19:57.282 "zerocopy_threshold": 0, 00:19:57.282 "tls_version": 0, 00:19:57.282 "enable_ktls": false 00:19:57.282 } 00:19:57.282 }, 00:19:57.282 { 00:19:57.282 "method": "sock_impl_set_options", 00:19:57.282 "params": { 00:19:57.282 "impl_name": "uring", 00:19:57.282 "recv_buf_size": 2097152, 00:19:57.282 "send_buf_size": 2097152, 00:19:57.282 "enable_recv_pipe": true, 00:19:57.282 "enable_quickack": false, 00:19:57.282 "enable_placement_id": 0, 00:19:57.282 "enable_zerocopy_send_server": false, 00:19:57.282 "enable_zerocopy_send_client": 
false, 00:19:57.282 "zerocopy_threshold": 0, 00:19:57.282 "tls_version": 0, 00:19:57.282 "enable_ktls": false 00:19:57.282 } 00:19:57.282 } 00:19:57.282 ] 00:19:57.282 }, 00:19:57.282 { 00:19:57.282 "subsystem": "vmd", 00:19:57.282 "config": [] 00:19:57.282 }, 00:19:57.282 { 00:19:57.282 "subsystem": "accel", 00:19:57.282 "config": [ 00:19:57.282 { 00:19:57.282 "method": "accel_set_options", 00:19:57.282 "params": { 00:19:57.282 "small_cache_size": 128, 00:19:57.282 "large_cache_size": 16, 00:19:57.282 "task_count": 2048, 00:19:57.282 "sequence_count": 2048, 00:19:57.282 "buf_count": 2048 00:19:57.282 } 00:19:57.282 } 00:19:57.282 ] 00:19:57.282 }, 00:19:57.282 { 00:19:57.282 "subsystem": "bdev", 00:19:57.282 "config": [ 00:19:57.282 { 00:19:57.282 "method": "bdev_set_options", 00:19:57.282 "params": { 00:19:57.282 "bdev_io_pool_size": 65535, 00:19:57.282 "bdev_io_cache_size": 256, 00:19:57.282 "bdev_auto_examine": true, 00:19:57.282 "iobuf_small_cache_size": 128, 00:19:57.282 "iobuf_large_cache_size": 16 00:19:57.282 } 00:19:57.282 }, 00:19:57.282 { 00:19:57.282 "method": "bdev_raid_set_options", 00:19:57.282 "params": { 00:19:57.282 "process_window_size_kb": 1024, 00:19:57.282 "process_max_bandwidth_mb_sec": 0 00:19:57.282 } 00:19:57.282 }, 00:19:57.282 { 00:19:57.283 "method": "bdev_iscsi_set_options", 00:19:57.283 "params": { 00:19:57.283 "timeout_sec": 30 00:19:57.283 } 00:19:57.283 }, 00:19:57.283 { 00:19:57.283 "method": "bdev_nvme_set_options", 00:19:57.283 "params": { 00:19:57.283 "action_on_timeout": "none", 00:19:57.283 "timeout_us": 0, 00:19:57.283 "timeout_admin_us": 0, 00:19:57.283 "keep_alive_timeout_ms": 10000, 00:19:57.283 "arbitration_burst": 0, 00:19:57.283 "low_priority_weight": 0, 00:19:57.283 "medium_priority_weight": 0, 00:19:57.283 "high_priority_weight": 0, 00:19:57.283 "nvme_adminq_poll_period_us": 10000, 00:19:57.283 "nvme_ioq_poll_period_us": 0, 00:19:57.283 "io_queue_requests": 512, 00:19:57.283 "delay_cmd_submit": true, 00:19:57.283 "transport_retry_count": 4, 00:19:57.283 "bdev_retry_count": 3, 00:19:57.283 "transport_ack_timeout": 0, 00:19:57.283 "ctrlr_loss_timeout_sec": 0, 00:19:57.283 "reconnect_delay_sec": 0, 00:19:57.283 "fast_io_fail_timeout_sec": 0, 00:19:57.283 "disable_auto_failback": false, 00:19:57.283 "generate_uuids": false, 00:19:57.283 "transport_tos": 0, 00:19:57.283 "nvme_error_stat": false, 00:19:57.283 "rdma_srq_size": 0, 00:19:57.283 "io_path_stat": false, 00:19:57.283 "allow_accel_sequence": false, 00:19:57.283 "rdma_max_cq_size": 0, 00:19:57.283 "rdma_cm_event_timeout_ms": 0, 00:19:57.283 "dhchap_digests": [ 00:19:57.283 "sha256", 00:19:57.283 "sha384", 00:19:57.283 "sha512" 00:19:57.283 ], 00:19:57.283 "dhchap_dhgroups": [ 00:19:57.283 "null", 00:19:57.283 "ffdhe2048", 00:19:57.283 "ffdhe3072", 00:19:57.283 "ffdhe4096", 00:19:57.283 "ffdhe6144", 00:19:57.283 "ffdhe8192" 00:19:57.283 ] 00:19:57.283 } 00:19:57.283 }, 00:19:57.283 { 00:19:57.283 "method": "bdev_nvme_attach_controller", 00:19:57.283 "params": { 00:19:57.283 "name": "nvme0", 00:19:57.283 "trtype": "TCP", 00:19:57.283 "adrfam": "IPv4", 00:19:57.283 "traddr": "127.0.0.1", 00:19:57.283 "trsvcid": "4420", 00:19:57.283 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:57.283 "prchk_reftag": false, 00:19:57.283 "prchk_guard": false, 00:19:57.283 "ctrlr_loss_timeout_sec": 0, 00:19:57.283 "reconnect_delay_sec": 0, 00:19:57.283 "fast_io_fail_timeout_sec": 0, 00:19:57.283 "psk": "key0", 00:19:57.283 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:57.283 "hdgst": false, 00:19:57.283 
"ddgst": false, 00:19:57.283 "multipath": "multipath" 00:19:57.283 } 00:19:57.283 }, 00:19:57.283 { 00:19:57.283 "method": "bdev_nvme_set_hotplug", 00:19:57.283 "params": { 00:19:57.283 "period_us": 100000, 00:19:57.283 "enable": false 00:19:57.283 } 00:19:57.283 }, 00:19:57.283 { 00:19:57.283 "method": "bdev_wait_for_examine" 00:19:57.283 } 00:19:57.283 ] 00:19:57.283 }, 00:19:57.283 { 00:19:57.283 "subsystem": "nbd", 00:19:57.283 "config": [] 00:19:57.283 } 00:19:57.283 ] 00:19:57.283 }' 00:19:57.283 20:43:11 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:57.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:57.283 20:43:11 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.283 20:43:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:19:57.283 [2024-11-26 20:43:11.606408] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 00:19:57.283 [2024-11-26 20:43:11.606463] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84554 ] 00:19:57.283 [2024-11-26 20:43:11.745338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.283 [2024-11-26 20:43:11.778327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.540 [2024-11-26 20:43:11.889024] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:57.540 [2024-11-26 20:43:11.933193] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:58.105 20:43:12 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.105 20:43:12 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:19:58.105 20:43:12 keyring_file -- keyring/file.sh@121 -- # jq length 00:19:58.105 20:43:12 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:19:58.105 20:43:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:58.363 20:43:12 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:19:58.363 20:43:12 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:19:58.363 20:43:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:19:58.363 20:43:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:58.363 20:43:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:58.363 20:43:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:58.363 20:43:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:58.363 20:43:12 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:19:58.363 20:43:12 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:19:58.363 20:43:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:19:58.363 20:43:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:58.363 20:43:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:58.363 20:43:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:58.363 20:43:12 keyring_file 
-- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:19:58.620 20:43:13 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:19:58.620 20:43:13 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:19:58.620 20:43:13 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:19:58.620 20:43:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:19:58.877 20:43:13 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:19:58.877 20:43:13 keyring_file -- keyring/file.sh@1 -- # cleanup 00:19:58.877 20:43:13 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.9I2xUpEQbR /tmp/tmp.Pqa8Kl59Zy 00:19:58.877 20:43:13 keyring_file -- keyring/file.sh@20 -- # killprocess 84554 00:19:58.877 20:43:13 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 84554 ']' 00:19:58.877 20:43:13 keyring_file -- common/autotest_common.sh@958 -- # kill -0 84554 00:19:58.877 20:43:13 keyring_file -- common/autotest_common.sh@959 -- # uname 00:19:58.877 20:43:13 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:58.877 20:43:13 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84554 00:19:58.877 20:43:13 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:58.877 20:43:13 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:58.877 killing process with pid 84554 00:19:58.877 20:43:13 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84554' 00:19:58.877 20:43:13 keyring_file -- common/autotest_common.sh@973 -- # kill 84554 00:19:58.877 Received shutdown signal, test time was about 1.000000 seconds 00:19:58.877 00:19:58.877 Latency(us) 00:19:58.877 [2024-11-26T20:43:13.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.877 [2024-11-26T20:43:13.432Z] =================================================================================================================== 00:19:58.877 [2024-11-26T20:43:13.432Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:58.877 20:43:13 keyring_file -- common/autotest_common.sh@978 -- # wait 84554 00:19:58.878 20:43:13 keyring_file -- keyring/file.sh@21 -- # killprocess 84309 00:19:58.878 20:43:13 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 84309 ']' 00:19:58.878 20:43:13 keyring_file -- common/autotest_common.sh@958 -- # kill -0 84309 00:19:58.878 20:43:13 keyring_file -- common/autotest_common.sh@959 -- # uname 00:19:58.878 20:43:13 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:58.878 20:43:13 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84309 00:19:59.135 20:43:13 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:59.135 20:43:13 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:59.135 killing process with pid 84309 00:19:59.135 20:43:13 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84309' 00:19:59.135 20:43:13 keyring_file -- common/autotest_common.sh@973 -- # kill 84309 00:19:59.135 20:43:13 keyring_file -- common/autotest_common.sh@978 -- # wait 84309 00:19:59.135 00:19:59.135 real 0m12.970s 00:19:59.135 user 0m32.033s 00:19:59.135 sys 0m2.104s 00:19:59.135 20:43:13 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:59.135 20:43:13 keyring_file -- 
common/autotest_common.sh@10 -- # set +x 00:19:59.135 ************************************ 00:19:59.135 END TEST keyring_file 00:19:59.135 ************************************ 00:19:59.135 20:43:13 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:19:59.135 20:43:13 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:19:59.135 20:43:13 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:59.135 20:43:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:59.135 20:43:13 -- common/autotest_common.sh@10 -- # set +x 00:19:59.135 ************************************ 00:19:59.135 START TEST keyring_linux 00:19:59.135 ************************************ 00:19:59.135 20:43:13 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:19:59.135 Joined session keyring: 699989751 00:19:59.393 * Looking for test storage... 00:19:59.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:19:59.393 20:43:13 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:59.393 20:43:13 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:19:59.393 20:43:13 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:59.393 20:43:13 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@345 -- # : 1 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@368 -- # return 0 00:19:59.393 20:43:13 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:59.393 20:43:13 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:59.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.393 --rc genhtml_branch_coverage=1 00:19:59.393 --rc genhtml_function_coverage=1 00:19:59.393 --rc genhtml_legend=1 00:19:59.393 --rc geninfo_all_blocks=1 00:19:59.393 --rc geninfo_unexecuted_blocks=1 00:19:59.393 00:19:59.393 ' 00:19:59.393 20:43:13 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:59.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.393 --rc genhtml_branch_coverage=1 00:19:59.393 --rc genhtml_function_coverage=1 00:19:59.393 --rc genhtml_legend=1 00:19:59.393 --rc geninfo_all_blocks=1 00:19:59.393 --rc geninfo_unexecuted_blocks=1 00:19:59.393 00:19:59.393 ' 00:19:59.393 20:43:13 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:59.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.393 --rc genhtml_branch_coverage=1 00:19:59.393 --rc genhtml_function_coverage=1 00:19:59.393 --rc genhtml_legend=1 00:19:59.393 --rc geninfo_all_blocks=1 00:19:59.393 --rc geninfo_unexecuted_blocks=1 00:19:59.393 00:19:59.393 ' 00:19:59.393 20:43:13 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:59.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.393 --rc genhtml_branch_coverage=1 00:19:59.393 --rc genhtml_function_coverage=1 00:19:59.393 --rc genhtml_legend=1 00:19:59.393 --rc geninfo_all_blocks=1 00:19:59.393 --rc geninfo_unexecuted_blocks=1 00:19:59.393 00:19:59.393 ' 00:19:59.393 20:43:13 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:19:59.393 20:43:13 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.393 20:43:13 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=38d6bd30-54c5-4858-a242-ab15764fb2d9 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.393 20:43:13 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.393 20:43:13 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.393 20:43:13 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.393 20:43:13 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.393 20:43:13 keyring_linux -- paths/export.sh@5 -- # export PATH 00:19:59.393 20:43:13 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@51 -- # : 0 
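Earlier in this nvmf/common.sh block, nvme gen-hostnqn set the host identity for the run: it returns a UUID-based host NQN, and NVME_HOSTID ends up as that same UUID. The exact expression common.sh uses to derive the ID is not visible in the trace; a minimal sketch that reproduces the traced values (the prefix-stripping expansion is an assumption):

NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:38d6bd30-54c5-4858-a242-ab15764fb2d9 in this run
NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed derivation: keep only the trailing UUID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")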
00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:59.393 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:59.393 20:43:13 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:19:59.393 20:43:13 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:19:59.393 20:43:13 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:19:59.393 20:43:13 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:19:59.393 20:43:13 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:19:59.393 20:43:13 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:19:59.393 20:43:13 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:19:59.393 20:43:13 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:19:59.393 20:43:13 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:19:59.393 20:43:13 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:19:59.393 20:43:13 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:19:59.393 20:43:13 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:19:59.393 20:43:13 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:19:59.393 20:43:13 keyring_linux -- nvmf/common.sh@733 -- # python - 00:19:59.393 20:43:13 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:19:59.393 /tmp/:spdk-test:key0 00:19:59.394 20:43:13 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:19:59.394 20:43:13 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:19:59.394 20:43:13 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:19:59.394 20:43:13 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:19:59.394 20:43:13 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:19:59.394 20:43:13 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:19:59.394 20:43:13 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:19:59.394 20:43:13 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:19:59.394 20:43:13 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:19:59.394 20:43:13 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:19:59.394 20:43:13 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:59.394 20:43:13 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:19:59.394 20:43:13 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:19:59.394 20:43:13 keyring_linux -- nvmf/common.sh@733 -- # python - 00:19:59.394 20:43:13 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:19:59.394 /tmp/:spdk-test:key1 00:19:59.394 20:43:13 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:19:59.394 20:43:13 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=84676 00:19:59.394 20:43:13 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:59.394 20:43:13 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 84676 00:19:59.394 20:43:13 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 84676 ']' 00:19:59.394 20:43:13 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.394 20:43:13 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.394 20:43:13 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.394 20:43:13 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.394 20:43:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:19:59.394 [2024-11-26 20:43:13.944496] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
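The two prep_key calls above write NVMe TLS PSKs in interchange form (NVMeTLSkey-1:<digest>:<base64>:) to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 and restrict them to mode 0600. The inline "python -" snippet run by format_key is not echoed in the trace; the following is a hedged reconstruction that reproduces the NVMeTLSkey-1:00:MDAx...JEiQ: string visible just below when the key is loaded with keyctl. The payload layout (ASCII key bytes followed by their CRC32, little-endian) and the argv-passing style are assumptions, not a verbatim copy of nvmf/common.sh:

format_key() {
    local prefix=$1 key=$2 digest=$3
    # digest 00 means the secret is used directly (no PSK hash)
    python - "$prefix" "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # byte order is an assumption
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()), end="")
PYEOF
}
format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 > /tmp/:spdk-test:key0
chmod 0600 /tmp/:spdk-test:key0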
00:19:59.394 [2024-11-26 20:43:13.944556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84676 ] 00:19:59.651 [2024-11-26 20:43:14.075903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.651 [2024-11-26 20:43:14.111820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.651 [2024-11-26 20:43:14.155680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:00.585 20:43:14 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.585 20:43:14 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:20:00.585 20:43:14 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:20:00.585 20:43:14 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.585 20:43:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:00.585 [2024-11-26 20:43:14.830792] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.585 null0 00:20:00.585 [2024-11-26 20:43:14.862760] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:00.585 [2024-11-26 20:43:14.862899] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:20:00.585 20:43:14 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.585 20:43:14 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:20:00.585 828729055 00:20:00.585 588874371 00:20:00.585 20:43:14 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:20:00.585 20:43:14 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=84693 00:20:00.585 20:43:14 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:20:00.585 20:43:14 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 84693 /var/tmp/bperf.sock 00:20:00.585 20:43:14 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 84693 ']' 00:20:00.585 20:43:14 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:00.585 20:43:14 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:00.585 20:43:14 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:00.586 20:43:14 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.586 20:43:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:00.586 [2024-11-26 20:43:14.922449] Starting SPDK v25.01-pre git sha1 97329b16b / DPDK 24.03.0 initialization... 
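Both interchange PSKs now sit in the kernel session keyring (the keyctl add ... @s calls above returned serials 828729055 and 588874371), and bdevperf has been started with --wait-for-rpc so the Linux-keyring backend can be enabled before the framework initializes. The sequence the trace drives next over /var/tmp/bperf.sock, collected here for readability (all three commands appear verbatim below):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

With the keyring_linux plugin enabled, --psk names a key in the kernel session keyring (:spdk-test:key0) rather than one registered from a file with keyring_file_add_key, which is the difference this run exercises relative to the keyring_file test above.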
00:20:00.586 [2024-11-26 20:43:14.922513] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84693 ] 00:20:00.586 [2024-11-26 20:43:15.051165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.586 [2024-11-26 20:43:15.083728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.522 20:43:15 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.522 20:43:15 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:20:01.522 20:43:15 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:20:01.522 20:43:15 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:20:01.522 20:43:16 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:20:01.522 20:43:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:01.778 [2024-11-26 20:43:16.269473] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:01.778 20:43:16 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:20:01.778 20:43:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:20:02.035 [2024-11-26 20:43:16.492316] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.035 nvme0n1 00:20:02.035 20:43:16 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:20:02.035 20:43:16 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:20:02.035 20:43:16 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:20:02.035 20:43:16 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:20:02.035 20:43:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:02.035 20:43:16 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:20:02.293 20:43:16 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:20:02.293 20:43:16 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:20:02.293 20:43:16 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:20:02.293 20:43:16 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:20:02.293 20:43:16 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:20:02.293 20:43:16 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:02.293 20:43:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:02.550 20:43:16 keyring_linux -- keyring/linux.sh@25 -- # sn=828729055 00:20:02.550 20:43:16 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:20:02.550 20:43:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
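check_keys then cross-checks SPDK's view of the key against the kernel's: it reads the key list back over the bperf RPC socket, extracts the serial number SPDK reports, and compares it (and the payload) with what keyctl finds in the session keyring. A condensed sketch of that verification, with the two jq stages from the trace folded into one expression (the sn variable is only for illustration):

sn=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
      | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')
keyctl search @s user :spdk-test:key0   # expected to print the same serial (828729055 in this run)
keyctl print "$sn"                      # expected to print the NVMeTLSkey-1:00:... interchange string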
00:20:02.550 20:43:16 keyring_linux -- keyring/linux.sh@26 -- # [[ 828729055 == \8\2\8\7\2\9\0\5\5 ]] 00:20:02.550 20:43:16 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 828729055 00:20:02.550 20:43:16 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:20:02.550 20:43:16 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:02.550 Running I/O for 1 seconds... 00:20:03.925 22823.00 IOPS, 89.15 MiB/s 00:20:03.925 Latency(us) 00:20:03.925 [2024-11-26T20:43:18.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.925 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:03.925 nvme0n1 : 1.01 22826.62 89.17 0.00 0.00 5590.44 4688.34 9679.16 00:20:03.925 [2024-11-26T20:43:18.480Z] =================================================================================================================== 00:20:03.925 [2024-11-26T20:43:18.480Z] Total : 22826.62 89.17 0.00 0.00 5590.44 4688.34 9679.16 00:20:03.925 { 00:20:03.925 "results": [ 00:20:03.925 { 00:20:03.925 "job": "nvme0n1", 00:20:03.925 "core_mask": "0x2", 00:20:03.925 "workload": "randread", 00:20:03.925 "status": "finished", 00:20:03.925 "queue_depth": 128, 00:20:03.925 "io_size": 4096, 00:20:03.925 "runtime": 1.005449, 00:20:03.925 "iops": 22826.617759826706, 00:20:03.925 "mibps": 89.16647562432307, 00:20:03.925 "io_failed": 0, 00:20:03.925 "io_timeout": 0, 00:20:03.925 "avg_latency_us": 5590.437947868871, 00:20:03.925 "min_latency_us": 4688.344615384615, 00:20:03.925 "max_latency_us": 9679.163076923078 00:20:03.925 } 00:20:03.925 ], 00:20:03.925 "core_count": 1 00:20:03.925 } 00:20:03.925 20:43:18 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:20:03.925 20:43:18 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:20:03.925 20:43:18 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:20:03.925 20:43:18 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:20:03.925 20:43:18 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:20:03.925 20:43:18 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:20:03.925 20:43:18 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:03.925 20:43:18 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:20:03.925 20:43:18 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:20:03.925 20:43:18 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:20:03.925 20:43:18 keyring_linux -- keyring/linux.sh@23 -- # return 00:20:03.925 20:43:18 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:03.925 20:43:18 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:20:03.925 20:43:18 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:03.925 
20:43:18 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:20:03.925 20:43:18 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:03.925 20:43:18 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:20:04.184 20:43:18 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:04.184 20:43:18 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:04.184 20:43:18 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:04.184 [2024-11-26 20:43:18.664182] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:04.184 [2024-11-26 20:43:18.664874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20645d0 (107): Transport endpoint is not connected 00:20:04.184 [2024-11-26 20:43:18.665868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20645d0 (9): Bad file descriptor 00:20:04.184 [2024-11-26 20:43:18.666867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:20:04.184 [2024-11-26 20:43:18.666884] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:20:04.184 [2024-11-26 20:43:18.666889] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:20:04.184 [2024-11-26 20:43:18.666895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:20:04.184 request: 00:20:04.184 { 00:20:04.184 "name": "nvme0", 00:20:04.184 "trtype": "tcp", 00:20:04.184 "traddr": "127.0.0.1", 00:20:04.184 "adrfam": "ipv4", 00:20:04.184 "trsvcid": "4420", 00:20:04.184 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:04.184 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:04.184 "prchk_reftag": false, 00:20:04.184 "prchk_guard": false, 00:20:04.184 "hdgst": false, 00:20:04.184 "ddgst": false, 00:20:04.184 "psk": ":spdk-test:key1", 00:20:04.184 "allow_unrecognized_csi": false, 00:20:04.184 "method": "bdev_nvme_attach_controller", 00:20:04.184 "req_id": 1 00:20:04.184 } 00:20:04.184 Got JSON-RPC error response 00:20:04.184 response: 00:20:04.184 { 00:20:04.184 "code": -5, 00:20:04.184 "message": "Input/output error" 00:20:04.184 } 00:20:04.184 20:43:18 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:20:04.184 20:43:18 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:04.184 20:43:18 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:04.184 20:43:18 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:04.184 20:43:18 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:20:04.184 20:43:18 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:20:04.184 20:43:18 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:20:04.184 20:43:18 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:20:04.184 20:43:18 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:20:04.184 20:43:18 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:20:04.184 20:43:18 keyring_linux -- keyring/linux.sh@33 -- # sn=828729055 00:20:04.184 20:43:18 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 828729055 00:20:04.184 1 links removed 00:20:04.184 20:43:18 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:20:04.184 20:43:18 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:20:04.184 20:43:18 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:20:04.184 20:43:18 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:20:04.184 20:43:18 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:20:04.184 20:43:18 keyring_linux -- keyring/linux.sh@33 -- # sn=588874371 00:20:04.184 20:43:18 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 588874371 00:20:04.184 1 links removed 00:20:04.184 20:43:18 keyring_linux -- keyring/linux.sh@41 -- # killprocess 84693 00:20:04.184 20:43:18 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 84693 ']' 00:20:04.184 20:43:18 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 84693 00:20:04.184 20:43:18 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:20:04.184 20:43:18 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.184 20:43:18 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84693 00:20:04.184 20:43:18 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:04.184 20:43:18 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:04.184 killing process with pid 84693 00:20:04.184 20:43:18 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84693' 00:20:04.184 20:43:18 keyring_linux -- common/autotest_common.sh@973 -- # kill 84693 00:20:04.184 Received shutdown signal, test time was about 1.000000 seconds 00:20:04.184 00:20:04.184 Latency(us) 
00:20:04.184 [2024-11-26T20:43:18.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.185 [2024-11-26T20:43:18.740Z] =================================================================================================================== 00:20:04.185 [2024-11-26T20:43:18.740Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:04.185 20:43:18 keyring_linux -- common/autotest_common.sh@978 -- # wait 84693 00:20:04.443 20:43:18 keyring_linux -- keyring/linux.sh@42 -- # killprocess 84676 00:20:04.443 20:43:18 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 84676 ']' 00:20:04.443 20:43:18 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 84676 00:20:04.443 20:43:18 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:20:04.443 20:43:18 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.443 20:43:18 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84676 00:20:04.443 20:43:18 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:04.443 20:43:18 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:04.443 20:43:18 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84676' 00:20:04.443 killing process with pid 84676 00:20:04.443 20:43:18 keyring_linux -- common/autotest_common.sh@973 -- # kill 84676 00:20:04.443 20:43:18 keyring_linux -- common/autotest_common.sh@978 -- # wait 84676 00:20:04.701 00:20:04.701 real 0m5.351s 00:20:04.701 user 0m10.383s 00:20:04.701 sys 0m1.133s 00:20:04.701 20:43:19 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:04.701 ************************************ 00:20:04.701 END TEST keyring_linux 00:20:04.701 ************************************ 00:20:04.701 20:43:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:04.701 20:43:19 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:04.701 20:43:19 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:04.701 20:43:19 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:20:04.701 20:43:19 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:20:04.701 20:43:19 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:04.701 20:43:19 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:04.701 20:43:19 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:04.701 20:43:19 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:04.701 20:43:19 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:20:04.701 20:43:19 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:20:04.701 20:43:19 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:20:04.701 20:43:19 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:20:04.701 20:43:19 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:20:04.701 20:43:19 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:20:04.701 20:43:19 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:20:04.701 20:43:19 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:20:04.701 20:43:19 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:20:04.701 20:43:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:04.701 20:43:19 -- common/autotest_common.sh@10 -- # set +x 00:20:04.701 20:43:19 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:20:04.701 20:43:19 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:20:04.701 20:43:19 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:20:04.701 20:43:19 -- common/autotest_common.sh@10 -- # set +x 00:20:06.071 INFO: APP EXITING 00:20:06.071 INFO: killing all VMs 
00:20:06.071 INFO: killing vhost app 00:20:06.071 INFO: EXIT DONE 00:20:06.328 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:06.328 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:20:06.328 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:20:06.891 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:06.891 Cleaning 00:20:06.891 Removing: /var/run/dpdk/spdk0/config 00:20:06.891 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:06.891 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:06.891 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:06.891 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:06.891 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:06.891 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:06.891 Removing: /var/run/dpdk/spdk1/config 00:20:06.891 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:20:06.891 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:20:06.891 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:20:06.891 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:20:06.891 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:20:06.891 Removing: /var/run/dpdk/spdk1/hugepage_info 00:20:06.891 Removing: /var/run/dpdk/spdk2/config 00:20:06.891 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:20:06.891 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:20:06.891 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:20:06.891 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:20:06.891 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:20:06.891 Removing: /var/run/dpdk/spdk2/hugepage_info 00:20:06.891 Removing: /var/run/dpdk/spdk3/config 00:20:06.891 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:20:06.891 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:20:06.891 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:20:06.891 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:20:06.892 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:20:06.892 Removing: /var/run/dpdk/spdk3/hugepage_info 00:20:06.892 Removing: /var/run/dpdk/spdk4/config 00:20:06.892 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:20:06.892 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:20:06.892 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:20:06.892 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:20:06.892 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:20:06.892 Removing: /var/run/dpdk/spdk4/hugepage_info 00:20:06.892 Removing: /dev/shm/nvmf_trace.0 00:20:06.892 Removing: /dev/shm/spdk_tgt_trace.pid56270 00:20:06.892 Removing: /var/run/dpdk/spdk0 00:20:06.892 Removing: /var/run/dpdk/spdk1 00:20:06.892 Removing: /var/run/dpdk/spdk2 00:20:06.892 Removing: /var/run/dpdk/spdk3 00:20:06.892 Removing: /var/run/dpdk/spdk4 00:20:07.149 Removing: /var/run/dpdk/spdk_pid56128 00:20:07.149 Removing: /var/run/dpdk/spdk_pid56270 00:20:07.149 Removing: /var/run/dpdk/spdk_pid56463 00:20:07.149 Removing: /var/run/dpdk/spdk_pid56545 00:20:07.149 Removing: /var/run/dpdk/spdk_pid56577 00:20:07.149 Removing: /var/run/dpdk/spdk_pid56681 00:20:07.149 Removing: /var/run/dpdk/spdk_pid56699 00:20:07.149 Removing: /var/run/dpdk/spdk_pid56833 00:20:07.149 Removing: /var/run/dpdk/spdk_pid57018 00:20:07.149 Removing: /var/run/dpdk/spdk_pid57166 00:20:07.149 Removing: /var/run/dpdk/spdk_pid57240 00:20:07.149 
Removing: /var/run/dpdk/spdk_pid57319 00:20:07.149 Removing: /var/run/dpdk/spdk_pid57417 00:20:07.149 Removing: /var/run/dpdk/spdk_pid57497 00:20:07.149 Removing: /var/run/dpdk/spdk_pid57530 00:20:07.149 Removing: /var/run/dpdk/spdk_pid57565 00:20:07.149 Removing: /var/run/dpdk/spdk_pid57635 00:20:07.149 Removing: /var/run/dpdk/spdk_pid57729 00:20:07.149 Removing: /var/run/dpdk/spdk_pid58146 00:20:07.149 Removing: /var/run/dpdk/spdk_pid58193 00:20:07.149 Removing: /var/run/dpdk/spdk_pid58233 00:20:07.149 Removing: /var/run/dpdk/spdk_pid58247 00:20:07.149 Removing: /var/run/dpdk/spdk_pid58298 00:20:07.149 Removing: /var/run/dpdk/spdk_pid58308 00:20:07.149 Removing: /var/run/dpdk/spdk_pid58364 00:20:07.149 Removing: /var/run/dpdk/spdk_pid58380 00:20:07.149 Removing: /var/run/dpdk/spdk_pid58420 00:20:07.149 Removing: /var/run/dpdk/spdk_pid58438 00:20:07.149 Removing: /var/run/dpdk/spdk_pid58478 00:20:07.149 Removing: /var/run/dpdk/spdk_pid58496 00:20:07.149 Removing: /var/run/dpdk/spdk_pid58621 00:20:07.149 Removing: /var/run/dpdk/spdk_pid58652 00:20:07.149 Removing: /var/run/dpdk/spdk_pid58739 00:20:07.149 Removing: /var/run/dpdk/spdk_pid59068 00:20:07.149 Removing: /var/run/dpdk/spdk_pid59085 00:20:07.149 Removing: /var/run/dpdk/spdk_pid59116 00:20:07.149 Removing: /var/run/dpdk/spdk_pid59124 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59140 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59153 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59172 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59182 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59201 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59220 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59230 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59249 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59267 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59278 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59297 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59311 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59326 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59345 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59363 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59374 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59410 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59424 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59453 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59525 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59554 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59563 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59592 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59601 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59609 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59651 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59665 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59693 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59703 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59712 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59722 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59731 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59741 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59749 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59754 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59783 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59815 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59819 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59853 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59858 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59865 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59906 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59917 00:20:07.150 Removing: 
/var/run/dpdk/spdk_pid59944 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59951 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59959 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59961 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59974 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59976 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59983 00:20:07.150 Removing: /var/run/dpdk/spdk_pid59991 00:20:07.150 Removing: /var/run/dpdk/spdk_pid60073 00:20:07.150 Removing: /var/run/dpdk/spdk_pid60115 00:20:07.150 Removing: /var/run/dpdk/spdk_pid60233 00:20:07.150 Removing: /var/run/dpdk/spdk_pid60263 00:20:07.150 Removing: /var/run/dpdk/spdk_pid60339 00:20:07.150 Removing: /var/run/dpdk/spdk_pid60359 00:20:07.150 Removing: /var/run/dpdk/spdk_pid60381 00:20:07.150 Removing: /var/run/dpdk/spdk_pid60401 00:20:07.150 Removing: /var/run/dpdk/spdk_pid60427 00:20:07.150 Removing: /var/run/dpdk/spdk_pid60448 00:20:07.150 Removing: /var/run/dpdk/spdk_pid60526 00:20:07.150 Removing: /var/run/dpdk/spdk_pid60542 00:20:07.150 Removing: /var/run/dpdk/spdk_pid60586 00:20:07.150 Removing: /var/run/dpdk/spdk_pid60653 00:20:07.150 Removing: /var/run/dpdk/spdk_pid60704 00:20:07.150 Removing: /var/run/dpdk/spdk_pid60727 00:20:07.150 Removing: /var/run/dpdk/spdk_pid60826 00:20:07.150 Removing: /var/run/dpdk/spdk_pid60874 00:20:07.150 Removing: /var/run/dpdk/spdk_pid60907 00:20:07.150 Removing: /var/run/dpdk/spdk_pid61133 00:20:07.150 Removing: /var/run/dpdk/spdk_pid61225 00:20:07.150 Removing: /var/run/dpdk/spdk_pid61254 00:20:07.150 Removing: /var/run/dpdk/spdk_pid61283 00:20:07.150 Removing: /var/run/dpdk/spdk_pid61317 00:20:07.150 Removing: /var/run/dpdk/spdk_pid61350 00:20:07.150 Removing: /var/run/dpdk/spdk_pid61383 00:20:07.150 Removing: /var/run/dpdk/spdk_pid61414 00:20:07.150 Removing: /var/run/dpdk/spdk_pid61800 00:20:07.150 Removing: /var/run/dpdk/spdk_pid61838 00:20:07.150 Removing: /var/run/dpdk/spdk_pid62163 00:20:07.150 Removing: /var/run/dpdk/spdk_pid62625 00:20:07.407 Removing: /var/run/dpdk/spdk_pid62881 00:20:07.407 Removing: /var/run/dpdk/spdk_pid63717 00:20:07.407 Removing: /var/run/dpdk/spdk_pid64757 00:20:07.407 Removing: /var/run/dpdk/spdk_pid64873 00:20:07.407 Removing: /var/run/dpdk/spdk_pid64942 00:20:07.407 Removing: /var/run/dpdk/spdk_pid66336 00:20:07.407 Removing: /var/run/dpdk/spdk_pid66651 00:20:07.407 Removing: /var/run/dpdk/spdk_pid69996 00:20:07.407 Removing: /var/run/dpdk/spdk_pid70339 00:20:07.407 Removing: /var/run/dpdk/spdk_pid70452 00:20:07.407 Removing: /var/run/dpdk/spdk_pid70588 00:20:07.407 Removing: /var/run/dpdk/spdk_pid70616 00:20:07.407 Removing: /var/run/dpdk/spdk_pid70639 00:20:07.407 Removing: /var/run/dpdk/spdk_pid70668 00:20:07.407 Removing: /var/run/dpdk/spdk_pid70762 00:20:07.407 Removing: /var/run/dpdk/spdk_pid70897 00:20:07.407 Removing: /var/run/dpdk/spdk_pid71050 00:20:07.407 Removing: /var/run/dpdk/spdk_pid71126 00:20:07.407 Removing: /var/run/dpdk/spdk_pid71314 00:20:07.407 Removing: /var/run/dpdk/spdk_pid71392 00:20:07.407 Removing: /var/run/dpdk/spdk_pid71479 00:20:07.407 Removing: /var/run/dpdk/spdk_pid71832 00:20:07.407 Removing: /var/run/dpdk/spdk_pid72253 00:20:07.407 Removing: /var/run/dpdk/spdk_pid72254 00:20:07.407 Removing: /var/run/dpdk/spdk_pid72255 00:20:07.407 Removing: /var/run/dpdk/spdk_pid72518 00:20:07.407 Removing: /var/run/dpdk/spdk_pid72781 00:20:07.407 Removing: /var/run/dpdk/spdk_pid73168 00:20:07.407 Removing: /var/run/dpdk/spdk_pid73170 00:20:07.407 Removing: /var/run/dpdk/spdk_pid73494 00:20:07.407 Removing: /var/run/dpdk/spdk_pid73508 
00:20:07.407 Removing: /var/run/dpdk/spdk_pid73522 00:20:07.407 Removing: /var/run/dpdk/spdk_pid73558 00:20:07.407 Removing: /var/run/dpdk/spdk_pid73563 00:20:07.407 Removing: /var/run/dpdk/spdk_pid73915 00:20:07.407 Removing: /var/run/dpdk/spdk_pid73958 00:20:07.408 Removing: /var/run/dpdk/spdk_pid74290 00:20:07.408 Removing: /var/run/dpdk/spdk_pid74492 00:20:07.408 Removing: /var/run/dpdk/spdk_pid74919 00:20:07.408 Removing: /var/run/dpdk/spdk_pid75471 00:20:07.408 Removing: /var/run/dpdk/spdk_pid76310 00:20:07.408 Removing: /var/run/dpdk/spdk_pid76951 00:20:07.408 Removing: /var/run/dpdk/spdk_pid76953 00:20:07.408 Removing: /var/run/dpdk/spdk_pid79019 00:20:07.408 Removing: /var/run/dpdk/spdk_pid79074 00:20:07.408 Removing: /var/run/dpdk/spdk_pid79135 00:20:07.408 Removing: /var/run/dpdk/spdk_pid79196 00:20:07.408 Removing: /var/run/dpdk/spdk_pid79312 00:20:07.408 Removing: /var/run/dpdk/spdk_pid79371 00:20:07.408 Removing: /var/run/dpdk/spdk_pid79422 00:20:07.408 Removing: /var/run/dpdk/spdk_pid79477 00:20:07.408 Removing: /var/run/dpdk/spdk_pid79843 00:20:07.408 Removing: /var/run/dpdk/spdk_pid81050 00:20:07.408 Removing: /var/run/dpdk/spdk_pid81192 00:20:07.408 Removing: /var/run/dpdk/spdk_pid81439 00:20:07.408 Removing: /var/run/dpdk/spdk_pid82040 00:20:07.408 Removing: /var/run/dpdk/spdk_pid82202 00:20:07.408 Removing: /var/run/dpdk/spdk_pid82358 00:20:07.408 Removing: /var/run/dpdk/spdk_pid82455 00:20:07.408 Removing: /var/run/dpdk/spdk_pid82632 00:20:07.408 Removing: /var/run/dpdk/spdk_pid82742 00:20:07.408 Removing: /var/run/dpdk/spdk_pid83447 00:20:07.408 Removing: /var/run/dpdk/spdk_pid83481 00:20:07.408 Removing: /var/run/dpdk/spdk_pid83522 00:20:07.408 Removing: /var/run/dpdk/spdk_pid83766 00:20:07.408 Removing: /var/run/dpdk/spdk_pid83806 00:20:07.408 Removing: /var/run/dpdk/spdk_pid83842 00:20:07.408 Removing: /var/run/dpdk/spdk_pid84309 00:20:07.408 Removing: /var/run/dpdk/spdk_pid84325 00:20:07.408 Removing: /var/run/dpdk/spdk_pid84554 00:20:07.408 Removing: /var/run/dpdk/spdk_pid84676 00:20:07.408 Removing: /var/run/dpdk/spdk_pid84693 00:20:07.408 Clean 00:20:07.408 20:43:21 -- common/autotest_common.sh@1453 -- # return 0 00:20:07.408 20:43:21 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:20:07.408 20:43:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:07.408 20:43:21 -- common/autotest_common.sh@10 -- # set +x 00:20:07.408 20:43:21 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:20:07.408 20:43:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:07.408 20:43:21 -- common/autotest_common.sh@10 -- # set +x 00:20:07.408 20:43:21 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:07.665 20:43:21 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:07.665 20:43:21 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:07.665 20:43:21 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:20:07.665 20:43:21 -- spdk/autotest.sh@398 -- # hostname 00:20:07.665 20:43:21 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:07.665 geninfo: WARNING: invalid characters removed from testname! 
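Editor's note: the long run of "Removing:" entries above is the autotest post-cleanup pass. It deletes the per-process DPDK runtime directories under /var/run/dpdk (spdk0 through spdk4 plus the spdk_pidNNNNN directories, each holding config, fbarray_memseg-*, fbarray_memzone and hugepage_info files) and the trace shared-memory files under /dev/shm left behind by the nvmf target and spdk_tgt processes. The helper that actually emits these lines is not part of this excerpt, so the following is only a minimal, hypothetical sketch of an equivalent cleanup step; the paths are taken from the log, the loop itself is assumed.

  # Hypothetical stand-in for the post-cleanup step seen in the log above.
  # Removes DPDK runtime state and SPDK trace shm files left by test targets.
  for rundir in /var/run/dpdk/spdk*; do
      [ -e "$rundir" ] || continue
      rm -rf "$rundir"     # config, fbarray_memseg-*, fbarray_memzone, hugepage_info
  done
  rm -f /dev/shm/nvmf_trace.* /dev/shm/spdk_tgt_trace.pid*   # per-PID trace buffers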
00:20:29.585 20:43:43 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:32.138 20:43:46 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:34.035 20:43:48 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:36.559 20:43:50 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:38.462 20:43:52 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:41.001 20:43:54 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:42.897 20:43:57 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:42.897 20:43:57 -- spdk/autorun.sh@1 -- $ timing_finish 00:20:42.897 20:43:57 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:20:42.897 20:43:57 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:42.897 20:43:57 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:20:42.897 20:43:57 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:42.897 + [[ -n 4982 ]] 00:20:42.897 + sudo kill 4982 00:20:42.907 [Pipeline] } 00:20:42.925 [Pipeline] // timeout 00:20:42.929 [Pipeline] } 00:20:42.946 [Pipeline] // stage 00:20:42.950 [Pipeline] } 00:20:42.968 [Pipeline] // catchError 00:20:42.979 [Pipeline] stage 00:20:42.981 [Pipeline] { (Stop VM) 00:20:42.993 [Pipeline] sh 00:20:43.315 + vagrant halt 00:20:45.852 ==> default: Halting domain... 
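Editor's note: the lcov invocations above follow the usual capture-then-filter flow: a test-time capture (-c) over the SPDK tree is written to cov_test.info with the hostname as the test name (-t), the base and test captures are combined with repeated -a options into cov_total.info, and successive -r passes strip coverage for DPDK, system headers under /usr, and a few example/app directories from the combined file. A condensed sketch of that sequence is below; the paths, patterns and core flags are as shown in the log, while the loop structure is assumed and the repeated --rc and --ignore-errors options are omitted for brevity.

  # Condensed sketch of the coverage post-processing shown in the log.
  OUT=/home/vagrant/spdk_repo/spdk/../output
  lcov -q -c --no-external -d /home/vagrant/spdk_repo/spdk \
       -t "$(hostname)" -o "$OUT/cov_test.info"                 # capture the test run
  lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" \
       -o "$OUT/cov_total.info"                                 # merge base + test captures
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
                 '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov -q -r "$OUT/cov_total.info" "$pattern" \
           -o "$OUT/cov_total.info"                             # drop unwanted paths
  done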
00:20:49.145 [Pipeline] sh 00:20:49.424 + vagrant destroy -f 00:20:51.949 ==> default: Removing domain... 00:20:51.961 [Pipeline] sh 00:20:52.237 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:20:52.246 [Pipeline] } 00:20:52.259 [Pipeline] // stage 00:20:52.264 [Pipeline] } 00:20:52.277 [Pipeline] // dir 00:20:52.281 [Pipeline] } 00:20:52.296 [Pipeline] // wrap 00:20:52.302 [Pipeline] } 00:20:52.314 [Pipeline] // catchError 00:20:52.323 [Pipeline] stage 00:20:52.325 [Pipeline] { (Epilogue) 00:20:52.337 [Pipeline] sh 00:20:52.612 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:57.879 [Pipeline] catchError 00:20:57.881 [Pipeline] { 00:20:57.894 [Pipeline] sh 00:20:58.173 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:58.174 Artifacts sizes are good 00:20:58.183 [Pipeline] } 00:20:58.199 [Pipeline] // catchError 00:20:58.212 [Pipeline] archiveArtifacts 00:20:58.219 Archiving artifacts 00:20:58.332 [Pipeline] cleanWs 00:20:58.344 [WS-CLEANUP] Deleting project workspace... 00:20:58.344 [WS-CLEANUP] Deferred wipeout is used... 00:20:58.350 [WS-CLEANUP] done 00:20:58.351 [Pipeline] } 00:20:58.366 [Pipeline] // stage 00:20:58.371 [Pipeline] } 00:20:58.385 [Pipeline] // node 00:20:58.390 [Pipeline] End of Pipeline 00:20:58.427 Finished: SUCCESS
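Editor's note: the epilogue above tears down the Vagrant guest (vagrant halt, then vagrant destroy -f), moves the output directory back into the Jenkins workspace, compresses and size-checks the artifacts, archives them, and wipes the workspace before the pipeline ends. A rough sketch of that sequence, with the script names and destination path taken from the log and everything else assumed, is:

  # Rough sketch of the job epilogue (VM teardown + artifact handling).
  vagrant halt                          # stop the test VM
  vagrant destroy -f                    # remove the domain and its storage
  mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
  jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
  jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh   # prints "Artifacts sizes are good"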